Many thanks for your question. I learned that the LOD and LOQ derived from a calibration curve are specific to that calibration. In other words, you should calculate the LOD and LOQ with the same calibration curve you use to quantify the samples. If you use an LOD and LOQ derived from a different calibration curve, one that uses much lower concentrations than the calibration in use, those numbers are not connected to the analysis and not meaningful, and reporting them could end in more than one eyebrow rising...
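For what it's worth, here is a minimal sketch of the curve-based estimate this refers to (the ICH-style LOD = 3.3·sy/slope and LOQ = 10·sy/slope, computed from the same curve used for quantification). The data and concentration range are made up for illustration:

```python
import numpy as np

# Hypothetical calibration data: concentration (e.g. ug/mL) vs. peak area
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
area = np.array([52.0, 101.0, 205.0, 498.0, 1003.0])

# Ordinary least-squares fit: area = slope*conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

# Residual standard deviation of the regression (sy), with n-2 degrees of freedom
residuals = area - (slope * conc + intercept)
sy = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))

# ICH-style estimates, valid only for THIS curve
lod = 3.3 * sy / slope
loq = 10.0 * sy / slope
print(f"LOD ~ {lod:.3f}, LOQ ~ {loq:.3f}")
```

The key point from the answer above is that these numbers only mean something when computed from the calibration actually used for the samples.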
I agree with Detlef's explanation. However, if you have some time, please check whether the values obtained by this method are OK by simply checking the signal/peak area for a standard sample at the concentration defined as the LOQ. Check that you can detect your analyte at this concentration, and check the RSD for three runs.
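That check takes only a few lines to script. The replicate areas below are hypothetical, and the acceptance limit is whatever your method specifies:

```python
import numpy as np

# Hypothetical replicate peak areas for a standard prepared at the LOQ
areas = np.array([10.2, 9.8, 10.5])

# Relative standard deviation of the three runs
rsd = 100.0 * np.std(areas, ddof=1) / np.mean(areas)
print(f"RSD = {rsd:.1f}%")
```

If the analyte is not cleanly detected at that level, or the RSD is poor, the curve-derived LOQ is not realistic.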
In the determination of the LOD and LOQ there are a number of things to keep in mind that haven't been mentioned so far.

First, the matrix of the calibration standards can affect the response if the matrix contributes anything to the signal of interest.

Second, you are likely doing some statistical processing of the data, and the weighting of the data points can significantly affect the fit. Ordinary least-squares fitting minimizes the sum of the squared residuals, so the largest response values will tend to dominate the fit and significantly outweigh the smaller values.

Third, the LOD and LOQ are defined by the noise in the signal. The noise is generally not a parameter that can be controlled, so it will tend to vary. In most cases the LOD and LOQ will vary from day to day, from instrument to instrument, and, given uncontrolled matrix effects, from sample set to sample set. As a result, the LOD and LOQ are not distinct values, only estimates.

As for running multiple calibration or sample response curves, it is worth doing, as it will account for the variability of the method. It should also be kept in mind that if there are trends in the lowest concentration values that are not consistent with the model being used to fit the data, such as Beer's law or an assumed linear detector response, this is indicative of some other process occurring, which must be assumed to be uncontrolled. If, for instance, a GC method with a nominally linear detector shows the signal falling off at low concentrations and a negative intercept, then you must assume that this intercept is likely to vary with time.
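The weighting point can be illustrated with made-up heteroscedastic data. One caveat: numpy's `polyfit` takes weights as `w = 1/sigma` (not `1/sigma**2`), so the common "1/x²" variance weighting corresponds to `w = 1/conc`:

```python
import numpy as np

# Hypothetical calibration with noise roughly proportional to concentration
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
area = np.array([11.0, 52.0, 103.0, 515.0, 980.0, 5100.0])

# Unweighted fit: the largest responses dominate the residual sum of squares
m_u, b_u = np.polyfit(conc, area, 1)

# Weighted fit: if sigma is proportional to concentration, pass w = 1/conc,
# which corresponds to the usual "1/x^2" weighting of the squared residuals
m_w, b_w = np.polyfit(conc, area, 1, w=1.0 / conc)

print(f"unweighted: slope={m_u:.2f}, intercept={b_u:.2f}")
print(f"weighted:   slope={m_w:.2f}, intercept={b_w:.2f}")
```

With real low-level data the two fits can give noticeably different intercepts, and hence noticeably different curve-based LOD/LOQ estimates.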
Multiple calibration curves should not be a red flag, but rather an indication that you are actually evaluating the variability of the method (systematic errors), including the sample preparation and the impact of the data processing.
It is a real shame that this isn't being adequately addressed in the educational system.
Remember, you should be the most staunch critic of your methods, data and science.
I think John Canham gave a more complete explanation of the value of considering not only your daily calibration curve but also measurements over other concentration ranges and the influence of the matrix. I also noted his emphasis on the fact that any statistical measurement is only an estimate. Everybody should keep these considerations in mind when measuring concentrations. We will always have a range rather than a sharp number, and even this range needs re-evaluation from time to time.
Thank you, John Canham, for the detailed explanation and for the reasoning behind the importance of multiple calibrations. Students sometimes struggle and get confused because they lack the proper information and understanding they should be given.
As mentioned, the matrix of the calibration standards can affect the response, and the LOD and LOQ are defined by the noise in the signal. I thank all the respondents, and I would like to thank John Canham in particular for an interesting answer.
Although the attachment is specific guidance for the estimation of LOD/LOQ for food contaminants, you can find some interesting statistical procedures and references in it.
Just from experience I can tell you that the LOD calculated from your curve is the least reliable method of calculating LOD. I vastly prefer a true statistical approach whereby you run a set of standards at or near the estimated LOD (we run them at the LOQ, or to be more precise at the lowest calibration concentration) and you calculate the statistically-derived LOD from those data. See 40 CFR 136 Appendix B for one approach; there are others. Sometimes this method is not in agreement with the time-tested eyeballometric analysis but you have to keep in mind that it is giving you the statistical probability that you actually detected something, not that you can see a peak.
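A minimal sketch of the 40 CFR 136 Appendix B calculation, with hypothetical replicate data (the t-value for 7 replicates, one-sided 99% confidence, 6 degrees of freedom, is taken from the standard Student's t table):

```python
import statistics

# Hypothetical: 7 replicate spikes at or near the estimated LOD,
# reported as measured concentrations
replicates = [0.48, 0.52, 0.55, 0.45, 0.50, 0.47, 0.53]

s = statistics.stdev(replicates)   # sample standard deviation of the replicates
t99 = 3.143                        # Student's t, 99% one-sided, 6 degrees of freedom
mdl = t99 * s                      # MDL per 40 CFR 136 Appendix B

print(f"s = {s:.4f}, MDL = {mdl:.3f}")
```

This is the "statistical probability that you actually detected something" approach: the MDL is the concentration at which a measured value is distinguishable from zero at 99% confidence, regardless of whether you can see a peak.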
Hopefully this will help you keep Detlef from raising all three eyebrows! LOL!!
I am not sure how rules and regulations apply throughout the world, but in my previous industrial work, we used solvent standards for calibration and matrix standards for determining LOD, LOQ, and a potential correction factor (i.e. recovery) for the individual analyte due to matrix effect (which may be concentration dependent).
So for calibration, we validated the linear response using at least seven points (sometimes up to 15) covering the entire linear range. On a daily basis, we used 3-5 points for method calibration. They were prepared from certified standards in the extraction solvent directly. Then we used certified standards spiked into a representative matrix and subsequently extracted following the protocol for analytical samples. During validation of the method, it may become obvious that there are no matrix effects or significant losses during extraction, and the QC samples may thus also be prepared directly in the extraction solvent. That being said, it is always preferable to do matrix QC, as this allows you to continuously monitor the process. QC samples were prepared as duplicates at low concentration (i.e. 3-5 times the desired LOD), and the LOD/LOQ and absolute uncertainty were continuously monitored against the method specifications based on QC statistics. We also used duplicates at high concentration (often around 10-20% of the upper limit) to determine relative uncertainty in a similar manner. For some analyses, where there were significant analyte- and concentration-dependent matrix effects, we also used duplicate mid-range concentration QCs to monitor that matrix effects complied with the method specifications determined during development and validation.
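The duplicate-based monitoring described above can be sketched as follows. The pairs are invented, and I am assuming the standard duplicate formula s = sqrt(Σd²/2n), used e.g. in the Nordtest uncertainty handbook, matches the procedure described:

```python
import math

# Hypothetical duplicate QC results (each tuple is one pair of
# independently prepared and measured duplicates)
pairs = [(4.9, 5.1), (5.2, 5.0), (4.8, 5.0), (5.1, 5.3)]

n = len(pairs)

# Repeatability from duplicates: s = sqrt( sum(d_i^2) / (2*n) )
sum_d2 = sum((a - b) ** 2 for a, b in pairs)
s_r = math.sqrt(sum_d2 / (2 * n))

# Relative standard deviation against the mean level of all duplicates
mean_level = sum(a + b for a, b in pairs) / (2 * n)
rsd = 100.0 * s_r / mean_level

print(f"s_r = {s_r:.3f}, RSD = {rsd:.1f}%")
```

Running this continuously on routine QC duplicates is what allows the LOD/LOQ and uncertainty claims to be checked against the method specifications over time, rather than fixed once at validation.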
In Denmark, for environmental analysis (we primarily worked with GC-FID and GC-MS), all of this was done in accordance with specific legal texts and NORDTEST, which is a guide for determining method uncertainty. I am sure something similar would apply elsewhere in the world.