In many trace-element analysis methods, the sample preparation procedure involves a step in which the sample is dissolved (or diluted) in laboratory reagents (or in water). The presence of the trace elements under investigation in those reagents can lead to an overestimation in the studied samples, especially where the elements are present at significantly higher concentrations in the reagents than in the samples themselves. Hence, in the general practice of trace-element analysis, a reagent blank is prepared as an additional sample using all the reagents employed in the various steps of the preparation procedure, and is analyzed concurrently by the same method as the other samples. The reported trace-element concentrations in the studied samples are then calculated taking their abundances in the reagent blank into account. In this way one avoids overestimating the trace elements in the studied samples due to their presence in the reagents used during sample preparation.
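To make the arithmetic concrete, here is a minimal Python sketch of this correction performed in concentration units; the function name, element, and all numbers are hypothetical:

```python
# Illustrative sketch of reagent-blank correction in concentration units.
# Assumes concentrations were already read off the calibration curve;
# all names and values are hypothetical.

def blank_corrected(sample_conc_mg_l, reagent_blank_conc_mg_l, dilution_factor=1.0):
    """Subtract the reagent-blank concentration, then apply any dilution."""
    return (sample_conc_mg_l - reagent_blank_conc_mg_l) * dilution_factor

# Example: Pb read as 0.085 mg/L in the digest, 0.012 mg/L in the reagent
# blank, sample diluted 10-fold during preparation.
print(blank_corrected(0.085, 0.012, dilution_factor=10.0))  # 0.73 mg/L
```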
I agree with Bimal. Trace analysis can be very tricky, however. The signal observed in the reagent blank should be relatively small and consistent compared to the sample signal, especially for critical measurements. I would prepare fresh reagents from scratch to be sure that they are free of contamination. If the signal continues to show up, be sure to at least run multiple reagent blanks to demonstrate the statistical consistency of the response. Of course, it would also be a good idea to try to figure out what is actually causing the signal. Perhaps the detector can be adjusted to reduce or eliminate the interference.
I don't know of recent textbooks on blank signals. During our lab work I settled on this logic: some chemicals are not stable over long periods of use, as their purity may change on contact with moisture. A simple example: NaOH left out can change its state, and Na2CO3, HCl, or H2SO4 change their molarity by absorbing moisture. Hence we need to standardize against a freshly prepared solution, the so-called blank.
In spectroscopy, suppose we need to study CO2 and H2O by FTIR: the stretching frequencies are obscured by atmospheric interference. Hence a blank is recommended to remove such errors. Some solvent effects can also be minimized by using a blank signal.
The standard additions method should help, especially in AAS and ICP analysis. What matters in this case is the purity of the water and of the standards you are using.
Thanks for all the comments so far. I was not focused on one particular analytical method, since I noticed the problem in many methods (electro-analytical, spectroscopic, and others) when I was still performing trace analysis myself (50 years ago, as a laboratory technician). During my Ph.D. work I encountered situations where the sample signal was less than the blank signal! So I performed a standard addition to the sample and to the reagent blank and discovered the reason (see the attached publication). As long as I corrected for the impurities in my chemicals via concentration units, I got the best results in inter-laboratory comparison tests (with the isotope dilution method in mass spectrometry, the reverse approach of subtracting two signals to get a net signal is not possible). Only when I subtracted the signal of the blank from the sample signals (as recommended by many SOPs) did I get into trouble with negative results, because the sample matrix influenced the sample signal only, and not that of the blank (which contains no sample matrix).
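For illustration, a minimal Python sketch of this slope check, assuming a linear response; all readings are invented, and the 5 % tolerance is just a placeholder judgment call:

```python
import numpy as np

# Standard-addition slope check for a matrix effect (all values invented).
# The same analyte spikes are added to the sample and to the reagent blank;
# if the two regression slopes differ, the matrix is altering the sensitivity
# and a simple subtraction of the blank signal is not justified.

added = np.array([0.0, 1.0, 2.0, 4.0])                   # spike conc, e.g. ug/L
signal_sample = np.array([0.120, 0.168, 0.215, 0.312])   # with sample matrix
signal_blank = np.array([0.030, 0.095, 0.160, 0.290])    # reagent blank, no matrix

slope_sample, _ = np.polyfit(added, signal_sample, 1)
slope_blank, _ = np.polyfit(added, signal_blank, 1)

ratio = slope_sample / slope_blank
print(f"sample slope {slope_sample:.4f}, blank slope {slope_blank:.4f}, ratio {ratio:.2f}")
if abs(ratio - 1.0) > 0.05:  # tolerance is a placeholder judgment call
    print("Slopes differ: matrix effect present, signal subtraction is unsafe.")
```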
In the “Guidelines for data acquisition and data quality evaluation in environmental chemistry” (Anal. Chem., 1980, 52 (14), pp. 2242–2249, DOI: 10.1021/ac50064a004) the subtraction of signals rather than concentrations is recommended! The same holds in many EPA methods, e.g. Method 7196A, section 7.2.2: “Develop the color of the standards as for the samples. Transfer a suitable portion of each colored solution to a 1-cm absorption cell and measure the absorbance at 540 nm. As reference, use reagent water. Correct the absorbance readings of the standards by subtracting the absorbance of a reagent blank carried through the method. Construct a calibration curve.”
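Translated into a short Python sketch (with invented absorbance values, not taken from the method itself), this EPA-style correction subtracts the blank signal before the calibration curve is constructed:

```python
import numpy as np

# Sketch of the EPA Method 7196A style correction quoted above (values
# invented): subtract the absorbance of a reagent blank carried through the
# method from each standard reading, then construct the calibration curve.

conc = np.array([0.0, 0.1, 0.2, 0.5, 1.0])        # mg/L Cr(VI), hypothetical
abs_std = np.array([0.012, 0.061, 0.108, 0.255, 0.505])
abs_reagent_blank = 0.012                          # blank carried through the method

corrected = abs_std - abs_reagent_blank
slope, intercept = np.polyfit(conc, corrected, 1)
print(f"calibration: A = {slope:.3f} * c + {intercept:.4f}")
```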
And the instrument manufacturers (e.g. Agilent) write in their manuals: “To zero a spectrometer properly, two steps must be carried out: 1. Set the instrument zero. This is the reference point against which all other analytical signals will be measured. This is properly performed under the instrument’s normal operating conditions. In the case of flame operation, for example, the instrument must be allowed to reach thermal equilibrium with the pure solvent (usually water) being aspirated as a "rinse" before being zeroed. 2. Set the analytical zero using an analytical blank solution.” This also corresponds to a subtraction of signals!
Since it looks like everyone is doing a simple signal subtraction of the blanks, and no standard addition test is suggested to check whether the calibration curve has the same slope (sensitivity), I find myself isolated. Am I wrong, or not?
Absolutely right, Mr. Karl. I agree with your stated reasons for the set-zero measurements. In UV analysis, the light from the source interacts with the sample over the cell path length (usually 1 cm), which determines the number of molecules excited per unit area. Not all of the light energy is absorbed by the analyte molecules; some is lost through exchange with the solvent. This loss can be minimized by the double-beam arrangement (a movable mirror splitting the beam), which provides an absolute correction for the solvent interference. Your point about correcting colored solutions against a blank is also absolutely right: by the Beer–Lambert law, absorbance is proportional to the thickness of the colored solution as well as to its concentration, so using a blank isolates the absorbance of the colored analyte from the solvent effect.
I also agree with your second argument, the two reasons for the blank set-zero. In flame analysis, the thermal excitation of a particular element is subject to foreign-ion interference at different levels of fluorescence relaxation. This can be minimized by, as you said, rinsing the nebulizer, which also sets the zero for the analyte solution. I feel there may still be further reasons for the blank set-zero that remain unknown to us. Anyway, thanks for sharing.
Hi, there are significant assumptions being made here. The important steps in establishing a method blank are:
1. Initial calibration of the instrument with a certified and traceable pure compound in a contamination-free, matrix-matched standard, the same as the prepared samples of interest. The calibration should contain a zero standard and at least two more standards than the order of your regression fit, i.e. a first-order fit requires a zero standard and at least 3 standards, a second-order regression a zero standard and 4 other standards, and a third-order regression a zero standard and at least 5 other standards.
When establishing the instrument method initially, the nature of the regression should be closely examined with an extended standard suite, maybe 10 to 15 standards. The standards should halve in concentration across the entire range (i.e. 10, 5, 2.5, 1.25, etc.); this loads the calibration in its lower part, where a blank effect in your preparation would be in question, so that a tight instrument calibration can be obtained.
These standards are not prepared according to the sample preparation procedure, but are rather a series of dilutions of primary certified pure reagents in a trace-clean matrix, using trace-free equipment.
I would also ensure that one of your concentration standards is no more than twice the limit of reporting of the method; where possible, a standard at or below the limit should prove the reporting-limit performance of the instrument. This should be done at the concentration that needs to be achieved in solution, i.e. taking sample weights and dilutions in the final prepared solution into account.
A calculated recovery of your standards using your regression should prove the performance of the regression at the zero standard and at your first standard, since most regressions will not weight all standards equally: higher-concentration standards are generally weighted more heavily unless weighting is specifically selected in your regression calculation. A sketch of this check follows below.
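As a rough illustration of that recovery check, here is a Python sketch (invented, slightly curved readings) showing how an unweighted fit can look fine across the range yet fail at the zero standard:

```python
import numpy as np

# Recovery check on the calibration regression (sketch, invented readings).
# Back-calculating each standard through the fitted curve shows whether the
# regression holds up at the zero standard and at the lowest real standard,
# where unweighted fits (dominated by the high standards) often perform worst.

conc = np.array([0.0, 1.25, 2.5, 5.0, 10.0])            # halving series + zero std
signal = np.array([0.005, 0.133, 0.268, 0.555, 1.205])  # hypothetical, slightly curved

slope, intercept = np.polyfit(conc, signal, 1)          # unweighted first-order fit
back = (signal - intercept) / slope                     # back-calculated concentrations

print(f"zero standard back-calculates to {back[0]:.3f} (should be ~0)")
for c, b in zip(conc[1:], back[1:]):
    print(f"std {c:5.2f}: recovery {b / c * 100:6.1f} %")
```

If low-end weighting is wanted, numpy.polyfit accepts a w argument that weights the residuals (e.g. w = 1/sqrt(conc) for a 1/x-style scheme, taking care to avoid dividing by zero at the zero standard), which pulls the fit toward the low standards.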
Now that you have a calibrated instrument, the repeatability of this calibration should be established, to ensure that any instrument-based bias is either continuous (such as optical efficiency, which may determine the regression used) or manageable (such as electronic or physical drift as equipment expands or as temperatures or flows fluctuate).
Now you are ready to investigate preparation effects, such as an elevated blank caused by a specific reagent or process used in the sample preparation. An example may be the consistency of boron in digests from leached borosilicate glassware; the standards, by contrast, are prepared in certified plastic volumetric flasks with pure reagents. Before considering subtracting the blank, you must first establish whether the contamination is specific and repeatable, and it must then be monitored in every preparation, to the point where you can demonstrate that the result is not a spot contamination but is consistent and reproducible at the performance levels of the method.
The blank should be measured as a reading in the solution that is actually read, and this intensity/absorbance reading should be subtracted from all other raw intensity/absorbance readings of samples that have been prepared under identical conditions. The item or reagent providing the contamination must be fixed in its input by controlling conditions such as time and temperature, surface exposed or leached, volume used, etc.
When a blank subtraction is performed, the method uncertainty will increase over that of a pure single-reading result, due to the combined use of two readings to give one final intensity/reading.
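For independent readings this follows from adding variances: u_net = sqrt(u_sample^2 + u_blank^2), which is always at least as large as either reading's uncertainty alone. A quick check in Python (hypothetical values):

```python
import math

# Uncertainty of a blank-subtracted result (independent readings):
# u_net = sqrt(u_sample**2 + u_blank**2), always >= u_sample alone.
u_sample, u_blank = 0.004, 0.003   # hypothetical absorbance standard deviations
u_net = math.sqrt(u_sample**2 + u_blank**2)
print(f"u_net = {u_net:.4f}")      # 0.0050 > 0.0040
```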
Calibration should then be performed using the primary/secondary standards, followed by the prepared samples with their sample blanks. Instrument calibration standards should be run at sufficient frequency to ensure that no substantial change in calibration characteristics has occurred across the batch being analysed. Certainly, a set of instrument calibration standards should be run at the start and as the final samples run on the instrument (a simple drift check is sketched below).
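One simple way to act on those bracketing calibrations, sketched in Python with invented slopes and an arbitrary 5 % tolerance:

```python
# Drift check (sketch): compare calibration slopes from the start and end of
# the batch; flag the run if the sensitivity shifted beyond a chosen tolerance.
slope_start, slope_end = 0.502, 0.487   # hypothetical calibration slopes
drift_pct = (slope_end - slope_start) / slope_start * 100
if abs(drift_pct) > 5:                  # tolerance is a judgment call
    print(f"Recalibrate: slope drifted {drift_pct:.1f} %")
else:
    print(f"Drift {drift_pct:.1f} % within tolerance")
```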
Zeroing on the sample preparation blank does not effectively allow tracking of blank performance, and there is some risk, if a negative value is created for a standard, that the regression formula will not adequately cope with the negative reading when producing the calibration formula and the line of best fit. I do not like this concept; it is better to have your dynamic range defined within the positive output of the instrument. Instruments can zero in many ways, some more rugged than others, so some care is needed here. When in doubt, or if no deep knowledge is available, play it safe and work within the range of your primary instrument calibration standards, with the standard zero as the baseline.
There are tools you can also use to indicate preparation, matrix, or reagent effects: vary the weight of sample used (the final concentration in the sample should not alter if the method is performing well; there should be a reasonable ability to cope with weight variation so long as you don't exhaust a critical reagent), do a series of low-level spike recoveries (a recovery sketch follows below), prepare your standard set through your sample preparation system and note the intercept and slope variance from your primary calibration set, or do a serial standard addition to a preparation blank or a low-level sample, again noting performance.
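As a concrete example of the spike-recovery arithmetic, a Python sketch with invented concentrations:

```python
# Low-level spike recovery (sketch, invented values): if the method performs
# well, the measured increment should match the amount spiked.

def spike_recovery_pct(conc_spiked_sample, conc_unspiked_sample, conc_added):
    """Recovery of a known spike added before sample preparation."""
    return (conc_spiked_sample - conc_unspiked_sample) / conc_added * 100

# 2.0 ug/L spiked; unspiked sample read 1.1 ug/L, spiked portion 2.9 ug/L.
print(f"{spike_recovery_pct(2.9, 1.1, 2.0):.0f} %")  # 90 %
```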
Take part in a low-level proficiency study, do the analysis by separate methods/instruments and compare results, use a low-level certified reference material, or submit some samples to another laboratory and note whether there are differences.
Talk to NATA; they are there to help and will provide options. Talk to the instrument manufacturer; they will have technical assistance and can probably recommend other labs performing this analysis whom you can ask.
Be assured you are not alone and your problems will have been tackled elsewhere.
Document your results and be methodical in your investigations.
Thanks for your extensive comments and suggestions. I worked for 10 years for a leading instrument manufacturer, but I doubt they want to discuss their suggested methodology, e.g. in photometry: placing the reagent blank (with the analyte-selective dye and all other chemicals used) in a cuvette in the reference channel (double-beam instrument), or zeroing a mono-beam instrument electronically with this reagent blank. In both cases it is assumed that a standard addition to each sample and to the blank would result in an identical slope of the calibration plot. Only then, in my opinion, is it permissible to subtract the blank signal from the sample signals. But according to the manufacturer's user manual this is done automatically (electronically) in both cases above. What I am missing is a warning, or at least information on the important underlying assumption that any matrix effect is definitely absent. In trace analysis of real samples this is seldom the case.
I totally agree with your statement: "When a blank subtraction is performed, the method uncertainty will increase over that of a pure single-reading result, due to the combined use of two readings to give one final intensity/reading." The variability of the reagent blanks is lost if only one blank is used in the optical reference channel or used to zero the instrument.
In the meantime, the sensitivities of many analytical methods and instruments have increased so much that in trace analysis the standard deviation of the blanks, calculated in concentration units, determines the detection limit, and no longer the instrument's noise or background fluctuations. In ultra-trace analysis it is difficult to produce a matrix-matched reagent blank, since even the best supra-pure chemicals may introduce your analyte and may raise the detection limit even more than the reagents strictly needed for the analysis. In many cases the sample matrix varies and is not known (e.g. at disposal sites). Working for an accreditation body, I encountered many inter-laboratory proficiency tests with disastrous results (often > 10 % outliers), despite excellent reproducibility within the individual labs. With the exception of isotope dilution mass spectrometry, none of the other instrumental techniques could be identified as superior. With this in mind, I asked whether a wrong blank correction could perhaps be the reason for such large systematic errors (bias). Is the bias maybe built into the corresponding instrumental technique, which always needs a zeroing?
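When blank variability dominates, a common estimate is to take the detection limit as three times the standard deviation of replicate blanks, converted to concentration units via the calibration slope. A Python sketch with invented readings:

```python
import numpy as np

# Detection limit from replicate reagent blanks (sketch, invented values).
# When blank variability dominates, the LOD is commonly estimated as
# 3 * s(blank), expressed in concentration units via the calibration slope.

blank_signals = np.array([0.0102, 0.0111, 0.0095, 0.0108, 0.0099, 0.0104])
slope = 0.502                          # signal per ug/L, from the calibration curve

s_blank = blank_signals.std(ddof=1)    # sample standard deviation of the blanks
lod = 3 * s_blank / slope
print(f"LOD ~ {lod:.4f} ug/L")
```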
Yes, you are right: with the standard addition technique you will notice a matrix effect. But it is more elaborate to add the standards to your sample solutions and to the chemical blank solution in order to see whether the graphical evaluation shows an identical slope of the calibration curve. I dream of analytical instruments that could perform this crucial test automatically (e.g. by adding microlitres of analyte standard to the sample and blank cuvettes, in the case of photometry). The remaining risk is: what are the slopes of the calibration curves below your measured signals? As mentioned above, the way to make sure you are in the linear range is to dilute your samples (if possible) and repeat the test; if both results agree, it will be OK. The ultimate control of any bias would, of course, be to use a totally different analytical method for validation, but this is normally done only if you have time and money. With over 50 years of experience, I am sure you will be surprised about the state of the art... Don't blame yourself! Most methods I found in the literature did not work when I tried to use them; the damned matrix effect always worked against me!
First acidify the sample and subject it to nitric acid digestion (on a hot plate), since that gives an acceptable matrix for flame AAS; follow the standard procedure (APHA 2005).