Hello,
I have recorded some DR-UV-Vis spectra of powder samples. As the samples absorb throughout the Vis and NIR regions, I have expanded the range to get the most complete spectrum possible. I used a Perkin Elmer 1050 WB, which uses a PMT detector for the UV-Vis (up to ca. 860 nm) and an InGaAs detector for the NIR. Since this is diffuse reflectance (DR), the energy reaching the detectors is inherently low, and the spectra tend to be noisy and distorted, mainly around the detector change. To correct for this while keeping precision, I decided to increase the energy input at the detectors through two main actions (among other minor ones):
- increase the integration time (more time spent at each wavelength) to get a better average of the signal
- increase the detector gains (like increasing the ISO in photography; see my note right after this list)
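A quick note on why I chose these two knobs, in case my reasoning is wrong: as I understand it, spending more time at each wavelength averages down the uncorrelated noise, so the signal-to-noise ratio should improve roughly as SNR ∝ sqrt(t_int), with t_int the integration time. What the gain does to the noise and to the accuracy is exactly what I am less sure about, and is part of my question below.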
I therefore decided not to widen the slits (slit = full width at half maximum of the input Gaussian for each wavelength), since broadening them could have made nearby points overlap. The step size is 5 nm and the slits were set to 2 nm.
With these settings I obtained very good, almost noiseless spectra. Now, the spectrometer allows some settings, such as the slits, to be varied automatically within a series of experiments, but not the gains. Should I deduce that optimising the spectrum of a sample by increasing the detector gain would result in an increased error? Am I leaving the calibrated range of the detector, and should I instead keep the gain low and broaden the signal Gaussian via the slit? I ask because I see a remarkable jump at the detector change, where both detectors are at their sensitivity minimum, and I don't know how accurate my measurement is.
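To make the jump concrete, below is a rough sketch of how one could quantify it and try a simple multiplicative patch in post-processing. The file name, column layout and the 860 nm changeover value are just placeholder assumptions for illustration, not something from the instrument software.

```python
import numpy as np

# Rough sketch: quantify the step at the PMT -> InGaAs changeover and try a
# simple multiplicative patch. Assumptions (placeholders): the spectrum is
# exported as a two-column comma-separated file (wavelength in nm,
# reflectance) called "spectrum.csv", and the changeover sits at 860 nm.

CHANGEOVER_NM = 860.0

data = np.loadtxt("spectrum.csv", delimiter=",")
order = np.argsort(data[:, 0])          # make sure wavelengths are ascending
wl, refl = data[order, 0], data[order, 1]

# Split into the PMT (UV-Vis) side and the InGaAs (NIR) side.
uvvis_mask = wl <= CHANGEOVER_NM
wl_uvvis, r_uvvis = wl[uvvis_mask], refl[uvvis_mask]
wl_nir, r_nir = wl[~uvvis_mask], refl[~uvvis_mask]

# Size of the jump: last UV-Vis point vs. first NIR point.
jump = r_nir[0] - r_uvvis[-1]
print(f"Step at the changeover: {jump:.4f} (reflectance units)")

# Crude patch: scale the NIR segment so its first point matches the last
# UV-Vis point (this assumes the mismatch is a pure scaling of the NIR
# signal, which is exactly what I am not sure about).
r_nir_scaled = r_nir * (r_uvvis[-1] / r_nir[0])

stitched = np.column_stack([np.concatenate([wl_uvvis, wl_nir]),
                            np.concatenate([r_uvvis, r_nir_scaled])])
np.savetxt("spectrum_stitched.csv", stitched, delimiter=",")
```

My doubt is precisely whether a multiplicative correction like this is legitimate, or whether by pushing the gain I have moved one detector outside its calibrated response, so that no simple rescaling can be trusted.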
Thank you very much to anybody who can help.