Interesting choice of log transformation. As written, you are proposing that the true relationship is Y = K*X^b, where K = exp(a + eps). If so, where is the error term in this model? It has been absorbed multiplicatively into K.
If instead you were to write Y = A + B*X^b + epsilon, how would the log transformation work? If you propose that log Y is linear in log X, then in all probability the error on the log scale will not be normally distributed, so using ordinary linear regression would not be correct.
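To illustrate the first case (this is my own sketch, not from the thread, with hypothetical parameter values): when the error is multiplicative and lognormal, i.e. Y = K*X^b*exp(eps) with eps ~ N(0, sigma^2), taking logs gives log Y = log K + b*log X + eps, which is an ordinary linear model, and OLS on the logs recovers the parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
K, b, sigma = 2.0, 1.5, 0.1            # hypothetical "true" parameters
X = rng.uniform(1.0, 10.0, size=5000)
# Multiplicative lognormal error: log Y is exactly linear in log X
Y = K * X**b * np.exp(rng.normal(0.0, sigma, size=X.size))

# OLS of log Y on log X (np.polyfit returns [slope, intercept])
b_hat, logK_hat = np.polyfit(np.log(X), np.log(Y), 1)
K_hat = np.exp(logK_hat)
```

If the true error were instead additive on the original scale, the same log-log fit would give a skewed, heteroscedastic error term, which is exactly the problem discussed below.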
Another question to ask: do log Y and log X make sense as raw variables? If not, please rethink the basic relationship before applying any transformation.
According to Jochen's explanation, the model is a standard regression and all the usual formulas apply. Thus, for example, if the error e' (in Jochen's notation) is iid normal with mean 0, then the estimators a'^ and b'^ of a' and b' are unbiased, since they are linear functions of the y' values.
A problem appears when the original model is exponential (which is probably the case for Raksmey) with an ADDITIVE iid perturbation:
Z = A*exp(B*X) + E
Taking the log of both sides leads to

log Z = log A + log( exp(B*X) + E/A ),
which can be written as follows:
log Z = log A + log[ exp(B*X) * (1 + E*exp(-B*X)/A) ] = log A + B*X + e,
where e = log(1 + E*exp(-B*X)/A). These errors are still independent, but not identically distributed, since they depend on X. One way to handle this case is to model e as normal with mean 0 and variance proportional to exp(-2*B*X), where B is first approximately estimated from a fit that treats e as iid normal. This step can then be repeated with the new value of B, and so on.
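The iteration described above can be sketched as follows (my own minimal implementation, with hypothetical parameter values): fit the log-linear model by OLS pretending e is iid, then reweight observations with weights proportional to exp(2*B_hat*X), since Var(e) is approximately proportional to exp(-2*B*X), and refit:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, sigma_E = 5.0, 0.4, 0.5          # hypothetical "true" parameters
X = rng.uniform(0.0, 5.0, size=2000)
# Exponential model with ADDITIVE iid noise, as in the thread
Z = A * np.exp(B * X) + rng.normal(0.0, sigma_E, size=X.size)

y = np.log(Z)                          # requires Z > 0; here A*exp(B*X) >> |E|
M = np.column_stack([np.ones_like(X), X])   # design matrix for log A + B*X

w = np.ones_like(X)                    # step 1: start with an unweighted fit
for _ in range(10):
    sw = np.sqrt(w)
    # Weighted least squares: minimize sum_i w_i * (y_i - logA - B*x_i)^2
    logA_hat, B_hat = np.linalg.lstsq(M * sw[:, None], y * sw, rcond=None)[0]
    # Step 2: Var(e_i) ~ exp(-2*B*x_i), so weights ~ exp(+2*B_hat*x_i)
    w = np.exp(2.0 * B_hat * X)
A_hat = np.exp(logA_hat)
```

The reweighting downweights small-X observations, where the additive noise E is large relative to A*exp(B*X) and therefore dominates the log-scale error; a few iterations are usually enough for B_hat to stabilise.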