
I am creating ELISA curve fitting software from scratch, and one thing I noticed is that when R^2 is calculated the typical way (from the sum of squared residuals between expected y and actual y), the curve only has to get the few points with the highest readings right to achieve a very high R^2 value. Because the standards are serially diluted, the points with lower readings are orders of magnitude smaller than the high points. So even if the expected values at the low points miss the actual values by a lot in percentage terms, their absolute contribution to the overall error of the curve is minimal. As a result, a curve that fits only the highest few points well can badly misfit the low-concentration points and still come out looking OK.
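To illustrate, here is a minimal sketch with made-up numbers: a "fit" that nails the two highest readings but misses the low ones by roughly 100% still reports an R^2 near 0.99, because the residual sum of squares is dominated by the largest readings.

```python
import numpy as np

# Hypothetical serially diluted standards: each reading ~4x lower than the last.
y_actual   = np.array([2.000, 0.500, 0.125, 0.031, 0.008])
# A "fit" that matches the two highest points but misses the low ones by ~100%.
y_expected = np.array([2.000, 0.510, 0.250, 0.060, 0.016])

ss_res = np.sum((y_actual - y_expected) ** 2)          # residual sum of squares
ss_tot = np.sum((y_actual - np.mean(y_actual)) ** 2)   # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(f"R^2 = {r_squared:.4f}")   # ~0.99 despite ~100% relative error at the low end
print("relative errors:", np.abs(y_actual - y_expected) / y_actual)
```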

My question is: when we fit the curve by least squares, should we normalize the error at each point (the difference between actual y and expected y) by dividing it by the expected y? That way we would be minimizing the percentage of unexplained difference at each point rather than the absolute difference.
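For what it's worth, one common way to express this idea is weighted least squares with weights proportional to 1/y^2, which is what passing a sigma proportional to the readings does in scipy.optimize.curve_fit. Below is a sketch, assuming a standard four-parameter logistic (4PL) model and hypothetical standard readings; it is only an illustration of relative-error weighting, not a definitive implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = zero-dose response, d = infinite-dose response,
    c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical serially diluted standards (concentration, OD reading).
conc = np.array([1000.0, 250.0, 62.5, 15.6, 3.9, 0.98])
od   = np.array([2.10, 1.45, 0.72, 0.25, 0.07, 0.02])

p0 = [0.01, 1.0, 50.0, 2.2]  # rough starting guesses

# Unweighted fit: minimizes absolute residuals, dominated by the high-OD points.
popt_abs, _ = curve_fit(four_pl, conc, od, p0=p0, maxfev=10000)

# Relative-error fit: sigma proportional to y down-weights the large readings,
# so each point contributes roughly its percentage error instead.
popt_rel, _ = curve_fit(four_pl, conc, od, p0=p0, sigma=od,
                        absolute_sigma=False, maxfev=10000)

for name, p in [("absolute", popt_abs), ("relative", popt_rel)]:
    rel_err = (four_pl(conc, *p) - od) / od
    print(f"{name:9s} fit, relative errors: {np.round(rel_err, 3)}")
```

With this weighting the low-concentration points pull on the fit roughly as hard as the high ones, which is usually what you want when the back-calculated concentrations at the low end matter.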

Let me know what you think and whether I can clarify anything. Appreciated.
