04 November 2018

Hi there,

I need a bit of help with statistics.

I developed an analytical method which relies on calibration curves. For calibration, the measured signal of each calibrator is plotted against its corresponding concentration, and a sigmoidal curve is fitted to these data (see picture attached).

To show that the calibration works reliably, we use the parameter R² (coefficient of determination), besides some other quality criteria that must be fulfilled. For my method, it is published that R² should lie in the range between 0.990 and 1.000 (which is the maximum anyway).
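To make the setup concrete, here is a small sketch in Python of what one calibration could look like, assuming a four-parameter logistic (4PL) model for the sigmoid and using SciPy's curve_fit. The model choice, the data values and all names are only illustrative, not the published method itself:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """Illustrative 4PL sigmoid: a = response at zero, d = upper plateau,
    c = inflection concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical calibrator data (concentration, measured signal), made up for this example
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
signal = np.array([0.05, 0.12, 0.40, 1.10, 1.90, 2.30, 2.45])

popt, _ = curve_fit(four_pl, conc, signal, p0=[0.05, 1.0, 5.0, 2.5], maxfev=10000)

# R² (coefficient of determination) of the fitted curve
residuals = signal - four_pl(conc, *popt)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R² = {r_squared:.4f}")  # acceptance criterion from the publication: 0.990 to 1.000
```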

I have measured 16 calibration curves, and now I want to state what the typical R² is and what distribution of R² can be expected. Can I simply calculate the mean and standard deviation of the 16 R² values?

For example: I have 16 R² values spanning from 0.990 to 1.000, with a mean of 0.998 and an SD of 0.003. But 0.998 + 0.003 is larger than 1.000, so I cannot write 0.998 ± 0.003, can I?
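To illustrate with made-up numbers (not my real data, just 16 values chosen to mimic the situation), this is the calculation I mean:

```python
import numpy as np

# Hypothetical R² values from 16 calibration runs, all between 0.990 and 1.000
r2 = np.array([0.9995, 0.9990, 0.9998, 0.9973, 0.9999, 0.9991, 1.0000, 0.9942,
               0.9988, 0.9996, 0.9960, 0.9999, 0.9985, 0.9993, 0.9979, 0.9997])

mean = r2.mean()
sd = r2.std(ddof=1)  # sample standard deviation (n - 1)
print(f"mean = {mean:.4f}, SD = {sd:.4f}")
print(f"mean + SD = {mean + sd:.4f}")  # can exceed 1.000, which is exactly the problem described above
```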

What kind of central value (mean, median, etc.) should I use in this case, and what kind of variance descriptor?
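For instance, would something like the median together with the interquartile range or the min/max range be more appropriate? A sketch with the same illustrative values as above:

```python
import numpy as np

# Same made-up R² values as in the previous sketch
r2 = np.array([0.9995, 0.9990, 0.9998, 0.9973, 0.9999, 0.9991, 1.0000, 0.9942,
               0.9988, 0.9996, 0.9960, 0.9999, 0.9985, 0.9993, 0.9979, 0.9997])

median = np.median(r2)
q1, q3 = np.percentile(r2, [25, 75])  # interquartile range as a spread measure
r2_min, r2_max = r2.min(), r2.max()

# Unlike mean ± SD, these summaries stay inside the observed range by construction
print(f"median = {median:.4f}")
print(f"IQR = [{q1:.4f}, {q3:.4f}], range = [{r2_min:.4f}, {r2_max:.4f}]")
```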

Thanks!

Peter
