I have modeled crop yield as a function of two treatments, each treatment at four levels. The model does not fit well, so I apply a log transformation. I then calculate least squares means, run a Tukey test on those means, plot a nice graph ... and get complaints. A common suggestion is to back-transform the results. This seems simple enough, but I don't know what the numbers mean.

The problem starts with typing a bunch of numbers into Excel. I can take the mean of these values. I can also take the log of each value, average the logs, and back-transform that average, and the result is nothing like the mean of the untransformed values. The reason is that back-transforming the mean of the logs gives the geometric mean, which for positive data is always less than or equal to the arithmetic mean. Needless to say, back-transforming the LSMeans and SEs in the original problem did not seem to work very well either.
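The Excel experiment above can be reproduced in a few lines. The values here are made up for illustration; any positive, right-skewed data shows the same gap between the two "means":

```python
import math

# Hypothetical yield-like values (not from the original data).
values = [10.0, 12.0, 15.0, 40.0, 90.0]

# Ordinary arithmetic mean of the raw values.
arithmetic_mean = sum(values) / len(values)

# Mean on the log scale, then back-transform with exp().
mean_log = sum(math.log(v) for v in values) / len(values)
back_transformed = math.exp(mean_log)  # this is the geometric mean

print(arithmetic_mean)   # 33.4
print(back_transformed)  # ~23.0, noticeably smaller (AM >= GM)
```

The back-transformed value is not a distorted arithmetic mean; it is exactly the geometric mean of the raw data, which is why the two numbers disagree whenever the data are not all equal.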
There are solutions. For example, present the raw means and standard deviations alongside the multiple-comparison results from the model fit to the transformed data. I have no problem with that.
I can find statistics textbooks that recommend back-transformation, so I was wondering whether it is generally accepted. If so, how does one interpret the result? It seems to me that labeling it a "mean" would be misleading, and I never see figures with labels like "back-transformed(log(value))."