There are some routine corrections that should be carried out.
The first one is division by the lamp profile. Essentially, a small fraction of the light from the source is redirected to a photodiode just before the sample. The current across the photodiode is recorded and used as a measure of the lamp output. This correction accounts for any fluctuations in the lamp output and for the non-uniform illumination across wavelength; for instance, xenon lamp output is typically much weaker in the UV. It also means that if you measure a spectrum now and again in 5 months, when the lamp has significantly aged, they should be comparable. Call the signal S and the lamp reference R.
The second correction is the dark offset. Usually the slits to the monochromators are shut and the dark counts of the detectors (the R PhotoDiode and S PMT) are recorded. If a 1 s integration time is used then the dark offset is usually measured for 10 s to get a good average. This average is then subtracted from S and R.
Next are wavelength correction factors for the excitation and emission. These are required because both monochromators have a grating response (this is also polarization dependent; Wood's anomalies can clearly be exposed by rotating a polarizer). See here for details (530 22 is commonly used for an excitation monochromator and 530 24 is commonly used for an emission monochromator):
Bear in mind Hamamatsu deliberately plot this on a log scale so it appears substantially better than it is. When it's plotted on a linear scale you can see how hopeless the PMT is at the likes of 800-850 nm... and 850-900 nm is pretty much unusable.
The drop in efficiency in the red explains why your measured spectra are significantly blueshifted and the correction factors appear to redshift your spectra.
So now you have:
Sc = (Signal − Signal Dark Offset) × Emission Wavelength Correction Factor
Rc = (Reference − Reference Dark Offset) × Excitation Wavelength Correction Factor
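As a minimal sketch of the arithmetic above (all array names and numbers are illustrative; real correction-factor curves come from your instrument's calibration files):

```python
import numpy as np

def correct_spectrum(S, R, S_dark, R_dark, em_corr, ex_corr):
    """Apply dark-offset subtraction and wavelength correction factors.

    S, R             : raw signal (PMT) and reference (photodiode) counts per wavelength point
    S_dark, R_dark   : scalar dark offsets, averaged over a long integration
    em_corr, ex_corr : per-wavelength emission/excitation correction factors
    """
    Sc = (S - S_dark) * em_corr   # corrected signal
    Rc = (R - R_dark) * ex_corr   # corrected reference
    return Sc, Rc, Sc / Rc        # Sc/Rc is the fully corrected spectrum

# toy example: flat lamp, detector half as sensitive at the second wavelength point
S = np.array([1000.0, 600.0])
R = np.array([500.0, 500.0])
Sc, Rc, corrected = correct_spectrum(S, R, S_dark=50.0, R_dark=10.0,
                                     em_corr=np.array([1.0, 2.0]),
                                     ex_corr=np.array([1.0, 1.0]))
```

Note how the correction factor is applied after the dark offset is subtracted; applying it to raw counts would also scale the dark signal, which is wrong.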
Recommended signals to measure are:
S - Always ensure you are working in the linear regime of your detector, e.g. ~2×10^6 CPS for the R928 PMT.
R - Check your lamp output is steady, it should look roughly flat for an emission scan. For an excitation scan, it'll show the lamp profile.
S/R - Uncorrected excitation/emission spectrum.
Sc - Influence of the correction factor on S (you will see if noise is substantially amplified).
Rc - Influence of the correction factor on R (for an emission scan this will be a single number; it's more important for an excitation scan or an Excitation Emission Matrix).
Sc/Rc - The corrected Signal.
Care must always be taken with correction factors. A weak signal at 500 nm measured with a low integration time, e.g. 0.1 s, may yield a decent-looking uncorrected S spectrum. However, if measured out to 850 nm with an R928, the correction factor in this regime may be over 200, so the noise will be substantially amplified, making the corrected spectrum Sc/Rc look extremely noisy.
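To see the noise amplification numerically (a toy sketch; the factor of 200 is taken from the R928 example above, everything else is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# weak signal of ~100 counts with noise fluctuations of ~10 counts
raw = 100.0 + rng.normal(0.0, 10.0, size=1000)

corr_500nm = 1.0    # near the detector's sensitivity peak
corr_850nm = 200.0  # deep in the red tail of an R928

# the correction factor scales signal and noise alike:
print(np.std(raw * corr_500nm))  # ~10 counts of noise
print(np.std(raw * corr_850nm))  # ~2000 counts of noise
```

The corrected spectrum is no more informative than the raw one; the correction merely rescales whatever noise was recorded, which is why a longer integration time is needed where the correction factor is large.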
Sometimes an additional blank subtraction is taken of solvent in the cuvette (measured using the same conditions).
For relative differences across a set of samples measured on the same instrument, uncorrected spectra are okay. You may only be looking for an increase/decrease in intensity.
On the other hand, if peak fitting and further analysis are done without correction factors... you may ascribe two individual peaks to a system when you are really fitting a single peak sitting right at a Wood's anomaly. It's surprisingly common in published papers.
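A quick way to convince yourself of this failure mode (purely synthetic numbers; the narrow dip stands in for a Wood's anomaly in the grating response):

```python
import numpy as np

wl = np.linspace(600.0, 700.0, 201)  # wavelength grid, nm

# one genuine emission band...
true_peak = np.exp(-((wl - 650.0) / 15.0) ** 2)
# ...multiplied by a narrow instrumental dip centred on the same region
grating_dip = 1.0 - 0.5 * np.exp(-((wl - 650.0) / 5.0) ** 2)
measured = true_peak * grating_dip

# the uncorrected spectrum now shows two local maxima either side of the dip
peaks = [i for i in range(1, len(wl) - 1)
         if measured[i] > measured[i - 1] and measured[i] > measured[i + 1]]
print(len(peaks))  # 2 apparent peaks from a single real band
```

Fitting `measured` with two Gaussians would converge happily and report two bands, even though the underlying emission has only one.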
I presume you are looking for an answer about why different phosphor materials need different excitation wavelengths?
If the answer is yes:
A luminescent material (in this case, a phosphor) contains a small portion of an activator, typically a rare-earth element such as Eu3+, Eu2+, Ce3+, etc. Different types of phosphor require different excitation wavelengths to excite the material. You may not see an emission spectrum at all if you excite a material at the wrong wavelength. Every phosphor has its own specific wavelength at which it can emit light efficiently, so choosing the correct wavelength is important for each phosphor material.
Thanks for the answer. I have another question: in general, do publications (not involving a theoretical study, as you said) use uncorrected spectra?
Farooq, thanks for the clarification. The equipment I use has automatic correction. I also notice that the corrected spectrum usually shows more noise and the baseline increases greatly in the near IR.
Can you specify what you mean by CORRECTED? Corrected for what? Detector sensitivity as a function of wavelength? Intrinsic emission strength as a function of wavenumber?
An issue of importance is the matching factor between the light emitted by a phosphor-scintillator after irradiation and a certain "photo-receptor", for example photo-sensing devices, CCDs etc. The term "corrected spectrum" may be interpreted in that sense.
In principle, that's my question. I use a QM-40 (PTI) spectrofluorimeter and it's possible to enable a ''real-time correction'' option which gives a corrected spectrum. I've read a bit about it and Farooq's explanation seems consistent:
''When a fluorescence spectrum is corrected, it is corrected for intensities, not wavelengths. This correction takes into account that the output of the xenon lamp is not uniform throughout the entire wavelength range and that gratings change their efficiency.''
I've just seen the specifications of the QM-40 spectrometer. The xenon lamp produces light over a spectrum between 185 nm and 680 nm (or 900 nm). The monochromators are used to select the narrow bands of wavelengths needed, I believe, for exciting the sample. Then the excitation light is recorded.
Taking into account that (a) the xenon lamp does not exhibit uniform intensity over the whole spectral range, (b) this non-uniform light is 'analysed' in the monochromators and (c) it is used for excitation, the intensity correction should be set to ON. This is because different intensities produce different excitations and, of course, differences in the fluorescence light emitted.
In your place, I would perform measurements both with the correction set to ON and to OFF to assess the real measurement differences. These differences are indicators of the bias of your system relative to the actual situation, namely the fluorescence light emitted by a sample.
I now understand that the question concerns the specific instrumentation with a lamp/monochromator set-up. I thought it was an issue about fluorescence spectra in general.
As for emission spectra, correcting them for the response and the dispersion of the detection system is mandatory if you want to do any analysis of the shape, and/or if you want to compare them precisely with spectra measured on other instruments (e.g. literature spectra which have been corrected).
As for fluorescence excitation spectra, correcting them for the spectrum of the source (if you used a lamp) or for laser intensity is necessary to compare them with the absorption spectrum.
Most published papers report uncorrected spectra unless differently stated.
Agree with all. The manual of the spectrometer will generally give (i) the quantum efficiency of the detector as a function of wavelength (it generally goes down with wavelength; for example the S20 response) as well as (ii) the diffraction efficiency of the grating blazed at a given wavelength (or of the prism in old systems). These are the factors one uses for correcting. At times, the corrected spectra at longer wavelengths can be surprisingly different from those just recorded. The dust and humidity collected on the optics over a machine's long years of service can also alter the response relative to the recorded one (the UV gets weaker), which can be corrected for by measuring reference samples.
When you correct an emission spectrum, the correction takes into account the grating efficiency and the detector efficiency. It is important to use it: without a correction there might be a maximum which is an artifact, due simply to the detector sensitivity being high in that spectral region. Likewise, without a correction one might observe a low-intensity region which is only so because of the low efficiency of the grating or the detector. The case of excitation spectra is similar, as gratings are, or could be, used there too. Additionally, there is the light source, which has its own characteristics. It can be a Xe lamp, a D2 lamp or synchrotron radiation, to name just a few. Without a correction, what one sees includes spectral features characteristic of the light source. Different light sources require different corrections; for example, for the VUV range one uses a sodium salicylate (Na-Sal) spectrum, as it has a constant light output throughout the region of interest.
As suggested above, one must distinguish between corrected excitation and corrected emission spectra. Corrected emission spectra do not usually differ considerably from uncorrected emission spectra given the red-sensitivity of most modern detectors (PMTs). The main correction - in emission spectra - for grating effects is for Wood's anomalies. In excitation spectra, on the other hand, the difference between corrected and uncorrected spectra can be considerable due to the enormous change in output intensity of the xenon arc source (by far the most common light source in spectrofluorimeters). Personally I do not like to use automatic corrections since I prefer to see the raw data - then when I apply correction factors I can judge how much they alter the spectra. These instrumentation considerations are discussed in my recent book, Introduction to Fluorescence.
I will add to what others have said that, with a concentrated solution, the emission maximum can be shifted because of reabsorption.
There is a kit approved by IUPAC, called the BAM Spectral Fluorescence Standards, that enables the measurement of fluorescence spectra against a standard. The idea is to get rid of all sources of error. The standards can also be used to check the validity of your spectrometer's correction function.
Sophie is absolutely correct. The new BAM standards provide a useful and facile way to obtain correction factors - instead of relying on the correction factors provided by the instrument manufacturer. In my graduate student days I had to obtain a standard lamp calibrated by the National Bureau of Standards and it was a huge effort to obtain corrected spectra.