The word "curve" is a general term referring to any interpolation of points on a graph. Before the computer enabled advance non-linear fitting, linear algebra was the state-of-art for curve fitting. Therefore the axis were always set (for instance, calculating the log of the values etc.) in order to receive a linear calibration curve and thereafter fit the straight line equation.
The word "curve" is a general term referring to any interpolation of points on a graph. Before the computer enabled advance non-linear fitting, linear algebra was the state-of-art for curve fitting. Therefore the axis were always set (for instance, calculating the log of the values etc.) in order to receive a linear calibration curve and thereafter fit the straight line equation.
When you calibrate an instrument, you normally only calibrate your instrument over a range where the response is linear. Over a large enough range, all responses are non-linear. Sometimes you need to use a Log transform or create a quadratic model.
Most chemists are afraid to use a non-linear calibration curve. I don't know why. From a practical and statistical point of view, requiring a linear calibration curve is silly!
On a GC-ECD I used to use, the company wanted us to calibrate PCBs from 0.05 to 1.5 ppb. This was the "linear" range of the instrument. I found that I could calibrate all the way up to 25 ppb and get excellent results. I had to use a quadratic calibration curve though. As part of a project in my Regression Analysis class, I compared the prediction ability of linear and quadratic models with and without intercepts. The quadratic model with an intercept was the best in every way possible! My employer didn't use quadratic cal curves because, "You just don't." The fact that it was superior didn't matter.
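The comparison described above is easy to reproduce. Here is a minimal sketch with made-up numbers (the data, noise values, and concentration range are all invented for illustration): build design matrices for linear and quadratic models with and without intercepts, fit each by ordinary least squares, and compare residual standard errors.

```python
import numpy as np

# Made-up calibration points over an extended range (ppb vs detector area);
# the response rolls off at high concentration, like a real detector.
conc = np.array([0.05, 0.5, 1.5, 5.0, 10.0, 15.0, 20.0, 25.0])
area = (120 * conc - 1.5 * conc**2 + 8
        + np.array([1.0, -2.0, 1.5, -1.0, 2.0, -1.5, 1.0, -1.0]))  # "noise"

def fit_rse(design):
    """Least-squares fit; return the residual standard error."""
    coef, *_ = np.linalg.lstsq(design, area, rcond=None)
    resid = area - design @ coef
    dof = len(area) - design.shape[1]
    return float(np.sqrt(np.sum(resid**2) / dof))

x = conc
models = {
    "linear, no intercept":    np.column_stack([x]),
    "linear + intercept":      np.column_stack([np.ones_like(x), x]),
    "quadratic, no intercept": np.column_stack([x, x**2]),
    "quadratic + intercept":   np.column_stack([np.ones_like(x), x, x**2]),
}
for name, design in models.items():
    print(f"{name:24s} RSE = {fit_rse(design):.2f}")
```

On data like these, the quadratic model with an intercept gives the smallest residual standard error, matching the outcome of the class project.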
You seem to comment more than to actually ask. :-)
Let me comment on your text.
"When you calibrate an instrument, you normally only calibrate your instrument over a range where the response is linear."
There is no scientific reason, it is only by convention.
However, the convention depends on the field of science and many other factors.
A linear "curve" may hide a non-linear relation e.g. a linear decibels (db) "curve".
" Over a large enough range, all responses are non-linear."
This seems trivial but it may just be an assumption that requires proof. I would agree with you if you said "many", but it is always dangerous to say "all" in a scientific discussion. :-)
"Sometimes you need to use a Log transform or create a quadratic model. "
Could this have something to do with the underlying mechanisms?
"Most chemists are afraid to use a non-linear calibration curve. "
Hmm, I have no statistical proof to back up my disagreement, but my personal experience differs.
But then I do know very few chemists that are particularly afraid of anything.
" From a practical and statistical point of view, requiring a linear calibration curve is silly! "
It is silly to use statistics to determine what kind of curve is applicable.
A better approach would be to base the decision on the mechanisms of the parameter to be measured and the mechanism your instrument uses to get the measurements.
" This was the "linear" range of the instrument. I found that I could calibrate all the way up to 25ppb and get excellent results. I had to use a quadratic calibration curve though. As part of a project in my Regression Analysis class, I compared the prediction ability of linear and quadratic models with and without intercepts. The quadratic model with an intercept was the best in every way possible! My employer didn't use quadratic cal curves because, " You just don't." The fact is was superior didn't matter."
In industry it may be the case that, if your boss wants pink flamingos, give him pink flamingos.
In free research you are allowed and encouraged to protest if there were no pink flamingos visible.
If you read through some of the older versions of Principles of Instrumental Analysis by Douglas Skoog, he stated (and I am paraphrasing) in no uncertain terms that "all calibration curves run through the origin and are linear." This is BS.
Chemical analysis instruments are "linear" over a small portion of their detector's range. In fact, this is only a linear approximation, and often a bad one. Every single instrument I have worked with (granted, it is only a couple dozen, and many were old) had very non-linear responses, even in the "linear" range.
Even if you shine a laser, it will not go straight forever. It will change direction, unless you have an infinitely long vacuum, devoid of all matter and the influence of gravity. If we are going to use make-believe land as our basis for theory and mathematical "proof", then we can make dragons too. And I want one ;-)
How do you tell what type of model to use for your data? Do you use a log transform or add a quadratic term? Both? You use statistics. If you read up on simple linear regression analysis ("linear" here in the mathematical sense, not the common usage), you will see that there are rules and assumptions for creating a regression model.
1) Samples have to be independent of each other. (Most of the instruments I have used showed either a rising or a falling response as I ran more samples in a day, so most of them fail on this assumption alone. But I can live with it.)
2) Residuals of the model should be (approximately) normally distributed. (If you fit a linear model for a non-linear or quadratic response, you will violate this assumption.)
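Assumption 2 is easy to check in practice by testing the residuals. A minimal sketch with simulated data (the Shapiro-Wilk test is one of several options): fit a straight line to a slightly curved response and test whether the residuals look normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(1, 20, 40)
y = 2.0 * x - 0.05 * x**2 + rng.normal(0, 0.1, x.size)  # gently curved response

# Fit a straight line anyway, then test the residuals for normality.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

stat, p = stats.shapiro(residuals)
print(f"Shapiro-Wilk p = {p:.2e}")  # a small p flags non-normal residuals
```

Here the systematic curvature left in the residuals makes the test reject normality, which is exactly the violation described in point 2.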
When we create a calibration curve, deciding, without using the data, that a certain model MUST be used is silly. There might be a theoretical reason why you choose one model over another. If you have three models, Log(y) = f(x) vs y = f(x, x^2) vs y = f(x), and all the models are equally good, and one particular model is supported by theory while the others are not, I would have no problem choosing the theoretically correct model. But if you decide that the Log(y) = f(x) model is the ONLY viable model because theory says so, yet the data do not support this model, then the theory is wrong. One way of determining whether one model is better than another is to use statistics. When you create a cal curve, how do you know if it is a good one? Statistics. What is a calibration curve? Some type of regression model. Where do you find regression models? Statistics.
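As a concrete sketch of "using statistics to compare models" (entirely made-up data): fit the three candidate models mentioned above and compare their residual sums of squares in the original response units.

```python
import numpy as np

# Made-up calibration data with gentle curvature.
x = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 16.0, 20.0])
y = 50 * x - 0.8 * x**2 + 10

def rss(pred):
    """Residual sum of squares in the original units of y."""
    return float(np.sum((y - pred) ** 2))

m_lin = np.polyfit(x, y, 1)            # y = f(x)
m_quad = np.polyfit(x, y, 2)           # y = f(x, x^2)
m_log = np.polyfit(x, np.log(y), 1)    # Log(y) = f(x), back-transformed below

results = {
    "linear":    rss(np.polyval(m_lin, x)),
    "quadratic": rss(np.polyval(m_quad, x)),
    "log":       rss(np.exp(np.polyval(m_log, x))),
}
for name in sorted(results, key=results.get):
    print(f"{name:9s} RSS = {results[name]:.3g}")
```

In a real comparison you would use replicate or held-out data, and an information criterion (AIC/BIC) or F-test rather than raw RSS, since a model with more parameters always fits the training points at least as well.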
In industry, if you blindly follow along with what others have said, without regard to what is right or true, just because the boss says, "These ignition switches are good. Use them!" does NOT make it right. Look at GM and their current crop of recalls or Toyota and their recalls, or any other company and their crops of recalls. Obedience without question => failure.
A linear calibration curve is a special case (and rare, as Farooq points out) of the generalized calibration Conc = f(x), where x is the instrument response. Normally instrument responses are only linear over a very short range of concentration. While many texts insinuate that you "should" have a linear calibration curve, this is really a throwback to having to plot the data manually. Given the computing capabilities present on even the most rudimentary instrument controller, there is no reason to limit yourself to a linear normal-space calibration. You should adequately model the instrumental response over the concentration range within which you are working (lots of standards), and then fit the data to the best statistical model that you can construct.
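A minimal sketch of the Conc = f(response) idea (all numbers invented; a saturating response is simulated and modelled with a quadratic): fit the response over the working range, then invert the fitted curve numerically to read off a concentration.

```python
import numpy as np
from scipy.optimize import brentq

# Made-up standards: a smoothly saturating detector response.
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 15.0, 20.0])
resp = 3.0 * conc / (1.0 + 0.02 * conc)

# Model the response with a quadratic over the working range...
coeffs = np.polyfit(conc, resp, 2)

# ...then invert it numerically to get a concentration from a measured response.
def conc_from_response(measured, lo=0.0, hi=20.0):
    return brentq(lambda c: np.polyval(coeffs, c) - measured, lo, hi)

measured = 3.0 * 8.0 / (1.0 + 0.02 * 8.0)   # an "unknown" prepared at 8 units
print(f"estimated concentration: {conc_from_response(measured):.2f}")
```

The same numerical inversion works for any monotone fitted model, which is the point of the generalized Conc = f(x) view.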
When we plot instrument response versus concentration and it reveals a straight line, this indicates that the instrument is working properly and that the response is directly proportional to concentration. Generally, the purpose of calibration is to check the instrument's working condition; that is why the calibration curve is very useful.
I have read all the very interesting comments and observations of the readers based on their varied experiences.
I would like to add further. In the calibration method, to establish the calibration graph using pure diluted working standard solutions, it is assumed that there are no 'matrix effects', i.e., reduction or enhancement of the signal by other matrix components accompanying the analyte of interest in the sample taken for analysis.
The calibration method is generally applied in spectrophotometric / fluorimetric determinations after the isolation of the analyte of interest from the accompanying matrix, for example by using selective reagents, adjusting pH conditions, using masking agents, or separations (if any) using solvent extraction, etc. The system should be free from matrix effects. Moreover, the calibration method is based on two assumptions: 1. The errors in the calibration experiment occur only in the instrumental response plotted on the vertical Y-axis. 2. The magnitude of the errors in the vertical Y-axis values is independent of the analyte concentration.
Both these assumptions have their limitations. In the case of the first assumption, the stock solution of standards of the analyte of interest can be made up with an error of ca. 0.1% or better, while the errors associated with the working standard solutions used for plotting the calibration graph may vary appreciably. The errors in the working standard solutions (diluted solutions) will depend on the type of glassware (pipettes, volumetric flasks) used for dilutions, acidity, concentration levels, and on storing/aging, etc. The second assumption is often unlikely to be true, because the relative errors in measurement are constant while the absolute errors increase as the analyte concentration increases. The total errors in the determination of analyte mass thus depend on the precision of the method as well as on the number of steps involved. Direct methods having high precision and involving a minimum of steps are more desirable to obtain reliable analytical data. Veselsky et al. (1988) applied laser fluorimetry for the determination of uranium in minerals after solvent extractive separation of uranium from the matrix (Veselsky, J.C., Kwiecinska, B., Wehrstein, E. and Suschny, O. (1988), Analyst, v. 113, pp. 451-455).
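The second limitation (roughly constant relative error, hence absolute error growing with concentration) is the classic motivation for weighted least squares. A minimal sketch with simulated data (the 3 % relative noise and the 1/signal weights are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
signal = 2.0 * conc * (1 + rng.normal(0, 0.03, conc.size))  # ~3 % relative noise

# Ordinary (unweighted) fit versus a fit weighted by 1/sigma ~ 1/signal;
# numpy.polyfit takes weights proportional to 1/sigma, not 1/sigma**2.
unweighted = np.polyfit(conc, signal, 1)
weighted = np.polyfit(conc, signal, 1, w=1.0 / signal)

print("unweighted slope, intercept:", unweighted)
print("weighted   slope, intercept:", weighted)
```

The unweighted fit lets the large-concentration points dominate; the weighted fit gives the low-concentration standards their fair share, which usually improves accuracy near the detection limit.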
There are three methods of measurement in analytical chemistry, namely (i) the calibration method, (ii) the standard addition method and (iii) the differential technique. The calibration method is the most commonly used for testing the linearity of the instrument response in the concentration range of the analyte adhering to Beer's law. In the standard addition method, the addition of a known amount of analyte to a sample and the observation of the response from that standard provides an easy means of accounting for any interferences in the sample. The standard addition method is generally recommended to test the validity of a new analytical method and also to account for any matrix effect. The differential technique (DT) is based on the comparison of the signal response of accurately known standards with a sample of similar but unknown concentration on the same sample-weight or dilution basis. The differential technique using reference standards guarantees the quality of an analytical result (accuracy, high precision, reliability, comparability and traceability). It is a self-standardized and absolute methodology of measurement.
All these methods have a significant bearing on the reliability, cost and traceability of measurement data. The applications of all these methods hold good when a linear relationship between signal response and analyte concentration is established. Truly, if proper care is not taken, the results obtained will depend on the vagaries of the analyst.
Article Trends in the Methods of Measurement in Analytical Chemistry
Jorge, I noticed your question was not listed under the topic "calibration curves," where there are some related questions. Perhaps this was an oversight? I found at least one interesting answer that you had written to a question under that topic, more recently, I think.
It would probably be of general interest to those reading these posts to also look under the topic "calibration curve," and also maybe "calibration," FYI.
I started writing a reply, but the question turned out to be complicated as well as interesting and so far it runs to 20 pages. Some formulae and figures should be added, but I'm posting it now in order to stay topical. The article could be thought of as a string of blogs on topics that have been raised over the years. Proposals for improvements, corrections and additions are welcome.
I hope the explanations are suitable for the many people who use analytical techniques incidentally to their main occupation, and also for interested statisticians who aren't familiar with analytical chemistry. Thanks are due to statistician Jim Knaub, who helped greatly with putting the ideas together.
Here's the ResearchGate link and abstract:
DOI: 10.13140/2.1.5049.5680
A ResearchGate question by Jorge Varejao: Why is a straight line calibration line often referred to as a calibration curve? This makes no sense. It is indeed possible to make calibration curves.
ABSTRACT: I propose that calibration curves known to be reliably linear are important because they obviate the need to prepare a curve each time a method is applied. If the straight line passes through the origin, a single calibration point is sufficient. If there is a significant intercept, it may be possible to estimate it with a blank measurement or apply a correction. Single point calibration saves time, uses less reference substance and facilitates drift detection and correction.
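The single-point calibration described in the abstract can be written in a few lines (all numbers are invented): with a line through the origin, one standard fixes the response factor; with a significant intercept, a blank measurement supplies the correction.

```python
# Single-point calibration: line through the origin, one standard fixes the slope.
standard_conc = 5.0        # e.g. mg/L (assumed value)
standard_signal = 12.5     # instrument response for that standard
response_factor = standard_signal / standard_conc

unknown_signal = 8.0
unknown_conc = unknown_signal / response_factor
print(f"unknown: {unknown_conc:.2f} mg/L")

# With a non-zero intercept, subtract a measured blank first.
blank_signal = 0.5
unknown_conc_corr = (unknown_signal - blank_signal) / response_factor
print(f"blank-corrected: {unknown_conc_corr:.2f} mg/L")
```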
Methods that are most reliable in the long term, and require minimal verification each time they are applied, tend to have responses that could be calculated directly from fundamental principles and physical constants, at least in theory. Generally, in such cases the response is linear or can be linearised by a mathematical analytical function. In practice, we usually calibrate using a reference substance which has to be suitably characterised, but this does not detract from the advantage of reliability.
It's important to distinguish between the linearity of the analytical signal (such as light absorption or emission) and that of the measuring instrument (such as a spectrometer). Instruments with linear responses are easier to calibrate, to verify at the moment of use, and to interchange. Sometimes, for example with electronic balances, one has to rely on the manufacturer for verifying the linearity.
Calibration involves various statistical difficulties and pitfalls, even when the response is sufficiently linear for the analyst's purpose. The use of straight-line approximations when there is slight non-linearity, for example with spectrophotometry, should be more adequately addressed. When the response function is non-linear and arbitrary, the difficulties increase, and a large number of standards may be required.
Conventional curve fitting methods do not provide a means of estimating the uncertainty of the analytical result. An algorithm proposed by Tellinghuisen (2000, 2005), straightforward to implement, may provide a convenient solution that is applicable whatever the degree of non-linearity. For each measurement, the result for the unknown is defined as an additional adjustable parameter in the least-squares fit of the calibration curve, an approach equivalent to inverse regression analysis, which is otherwise only available for straight-line fits.
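A sketch of the approach described above (made-up data; a straight-line model is used for brevity, but nothing in the code depends on linearity): the unknown is added to the least-squares fit as one extra adjustable parameter.

```python
import numpy as np
from scipy.optimize import least_squares

# Made-up calibration standards and one measured signal for the unknown.
x_std = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
y_std = np.array([0.1, 2.1, 3.9, 8.2, 15.8])
y_unknown = 6.0

def model(b0, b1, x):
    return b0 + b1 * x   # swap in any curve shape here

def residuals(p):
    b0, b1, x_u = p
    cal = y_std - model(b0, b1, x_std)     # calibration residuals
    unk = y_unknown - model(b0, b1, x_u)   # unknown treated as one more data point
    return np.append(cal, unk)

fit = least_squares(residuals, x0=[0.0, 2.0, 3.0])
b0, b1, x_u = fit.x
print(f"estimated unknown: {x_u:.3f}")
```

For a straight line this reproduces classical inverse prediction; the attraction is that the same code, and the covariance matrix of the fit, carry over unchanged to non-linear calibration curves.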
Finally, I propose that many of the difficulties and statistical uncertainties could be reduced or eliminated by using modern instrument technology to allow one to calibrate over a much narrower working range. Depending on the situation, we could either make up solutions at concentrations closer to nominal, or iteratively adjust the injected amount of standard or unknown. Among the advantages would be simplification of method validation protocols, which presently constitute a barrier to "in-use" method development.