22 May 2020

"Analytical sensitivity is defined as the concentration equal to N standard deviations above the zero calibrator. The most usual value for N is 2. To determine the sensitivity in this way, a zero concentration calibrator or sample is assayed approximately 20 times within an assay. The mean and standard deviation of the signal are calculated and the mean signal level ±2 standard deviations interpolated as an unknown from the calibration curve. This “concentration,” which is a function of both the imprecision of the signal generated and the slope of the calibration curve, is the analytical sensitivity."

Can someone explain this with an example?

How is the calibration curve constructed here, and what exactly is the "zero concentration calibrator"? Is it just a blank in the same sample matrix without the analyte?

"A zero concentration calibrator or sample is assayed approximately 20 times within an assay": does this mean the blank is replicated 20 times and the SD of the signal is calculated?

"±2 standard deviations interpolated as an unknown from the calibration curve": does this mean the blank mean plus 2 SD of the signal is read off the calibration curve as if it were an unknown sample, giving an estimated concentration?

Say the SD of my blank signal is 1: is a concentration of 2 then the analytical sensitivity?
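For what it is worth, here is a minimal worked calculation of what the quoted definition appears to describe, assuming a hypothetical linear calibration curve with slope b = 5 signal units per concentration unit and intercept equal to the mean blank signal. The point is that the SD of the blank is in signal units, so it still has to be converted to a concentration through the curve rather than taken directly as the sensitivity.

```latex
% Sketch under the assumptions above: linear curve S = a + b C,
% a = mean blank signal, b = 5 (hypothetical slope), SD_blank = 1.
S_{\text{cutoff}} = \bar{S}_{\text{blank}} + 2\,\mathrm{SD}_{\text{blank}}
\qquad
C_{\text{sensitivity}} = \frac{S_{\text{cutoff}} - a}{b}
                       = \frac{2\,\mathrm{SD}_{\text{blank}}}{b}
                       = \frac{2 \times 1}{5} = 0.4
```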
