I keep the ratio of count rate to excitation rate at 1% in my TCSPC measurements, but the measured lifetime is still shorter than the reported value. Are there any other factors that could cause this effect?
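For reference, here is the quick Poisson estimate I used to convince myself that pile-up should be negligible at that 1% ratio (the 1% figure is the only number from my setup; the rest is just arithmetic):

```python
import math

# Assumed: detected count rate is 1% of the excitation (repetition) rate,
# as in my setup; photon detections per excitation cycle treated as Poisson.
mean_photons_per_cycle = 0.01

p_ge1 = 1 - math.exp(-mean_photons_per_cycle)                                  # >= 1 photon in a cycle
p_ge2 = 1 - math.exp(-mean_photons_per_cycle) * (1 + mean_photons_per_cycle)  # >= 2 photons in a cycle

# Fraction of recorded events that actually had a second (lost) photon,
# i.e. the classic pile-up bias towards early times:
print(f"P(>=1 photon per cycle) = {p_ge1:.4%}")
print(f"P(>=2 | >=1)            = {p_ge2 / p_ge1:.4%}")
```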
An incorrect instrument response function (IRF) can be another source of artificial distortion of the TCSPC decay. Besides instrumental artefacts, there are a number of factors that may influence the actual lifetime; this depends very much on what your fluorophore is and under what conditions you measure it. To name some examples: fluorescence lifetimes are frequently very sensitive to the environment, so make sure you are using the same solvent in which the reported value was measured. Also, if you are using a very high concentration of the fluorophore, the actual lifetime may be shortened by self-quenching.
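To make the IRF point concrete, below is a minimal reconvolution-fit sketch; the lifetime, channel width and IRF width are made-up illustrative numbers, not values from your system. It simply shows how the recorded IRF enters the analysis, so any distortion of that IRF biases the fitted lifetime directly.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative numbers only (not from this thread).
dt = 0.05                                   # assumed channel width, ns
t = np.arange(0.0, 50.0, dt)
tau_true = 4.0                              # ns, roughly fluorescein-like

def gaussian_irf(t, t0, fwhm):
    """Normalised Gaussian stand-in for a measured IRF."""
    sigma = fwhm / 2.3548
    g = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    return g / g.sum()

def reconvolution(t, tau, amplitude, irf):
    """Fitting model: IRF convolved with a mono-exponential decay."""
    decay = np.exp(-t / tau)
    return amplitude * np.convolve(irf, decay)[: len(t)]

irf = gaussian_irf(t, 5.0, 0.5)             # IRF centred at 5 ns, 0.5 ns FWHM
data = reconvolution(t, tau_true, 1.0, irf)
data = np.random.default_rng(0).poisson(data * 1e4) / 1e4   # add counting noise

popt, _ = curve_fit(lambda tt, tau, a: reconvolution(tt, tau, a, irf),
                    t, data, p0=[2.0, 0.5])
print(f"recovered tau = {popt[0]:.2f} ns (true {tau_true} ns)")
# If the IRF passed to the fit is distorted (wrong width, shifted in time,
# afterpulsing tail, ...), the recovered lifetime will be biased.
```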
Thanks for your kind response, Radek! I am sure that I was using the same solvent as in the literature, and the concentration of the fluorophore (fluorescein) is in the low micromolar range, so self-quenching should not be significant in my case. Is there any way to check whether the IRF is distorted or not?
Absolute time calibration using fluorescence standards is not recommended, because the decay time will be influenced by the level of dissolved oxygen. You would need to spend time nitrogen-purging each sample, which is quite tedious, and the decay time will still depend on how well you purge the system. That said, you should get close to the literature value after thorough nitrogen purging.
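To give a feel for the size of the oxygen effect, here is a back-of-the-envelope Stern-Volmer estimate; the lifetime, quenching constant and oxygen concentration below are typical assumed values, not measurements from this thread:

```python
# Rough Stern-Volmer estimate of oxygen quenching (assumed, illustrative values):
#   1/tau = 1/tau0 + kq * [O2]
tau0 = 4.0e-9        # unquenched lifetime, s (assumed)
kq = 1.0e10          # near-diffusion-limited quenching constant, M^-1 s^-1 (assumed)
o2 = 2.7e-4          # dissolved O2 in air-saturated water, M (typical value)

tau = 1.0 / (1.0 / tau0 + kq * o2)
print(f"air-saturated lifetime ~ {tau * 1e9:.2f} ns vs {tau0 * 1e9:.2f} ns purged")
```

The effect is only a few percent for a ~4 ns dye, but it grows with the unquenched lifetime, which is why purging matters most for long-lived standards.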
I quite often use a series of fixed 1-63 ns delay boxes to calibrate our systems. Usually I measure a prompt to 1000 counts and plot the peak position of the prompt against the delay-box setting. Once I have measured across the entire time range I can see where the TAC performs linearly and where, at the edges of the range, it does not. As the delay boxes are fixed wires they are unlikely to drift, unlike some of the old TACs, which are capacitor based. Significant time-calibration variances are usually only observed on the short time ranges.
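As a sketch of how I analyse those prompt positions (the delay settings and peak channels below are hypothetical, not real calibration data):

```python
import numpy as np

# Hypothetical delay-box calibration data (illustrative only):
# peak channel of the prompt recorded for each fixed cable delay.
delay_ns     = np.array([1, 2, 4, 8, 16, 32, 63], dtype=float)
peak_channel = np.array([85, 125, 206, 368, 692, 1340, 2596], dtype=float)

# A linear fit gives the time calibration (ps per channel); the residuals
# expose regions where the TAC departs from linearity.
slope, intercept = np.polyfit(delay_ns, peak_channel, 1)
residual = peak_channel - (slope * delay_ns + intercept)

print(f"calibration: {1.0 / slope * 1e3:.1f} ps/channel")
for d, r in zip(delay_ns, residual):
    print(f"  {d:5.0f} ns delay -> residual {r:+.1f} channels")
```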
Mono-exponential lifetime standards (without nitrogen purging) are still useful checks to perform, as they give the fitting a test (although the decay time will be shorter than reported in the literature). With mono-exponential standards you can see whether there are any issues with the system, such as optical problems (misalignment; do a focus scan before the decay if your system has focusing optics), polariser problems (polarisers are required with a laser excitation source; damaged polarisers at VM, misaligned polarisers or no polarisers at all will produce multiple decay times with a laser source and a mono-exponential standard) or RF pickup (check the residuals and autocorrelation and make sure they are random and do not display a sinusoidal pattern).
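On the residuals/autocorrelation point, something like the following sketch is what I mean; the decay and fit arrays at the bottom are fake placeholders for your own data:

```python
import numpy as np

def weighted_residuals(counts, fit):
    """Poisson-weighted residuals of a TCSPC fit."""
    counts = np.asarray(counts, dtype=float)
    fit = np.asarray(fit, dtype=float)
    return (counts - fit) / np.sqrt(np.clip(fit, 1.0, None))

def autocorrelation(residuals, max_lag=50):
    """Normalised autocorrelation of the residuals; RF pickup shows up as a
    periodic (sinusoidal) pattern instead of noise scattered around zero."""
    r = residuals - residuals.mean()
    denom = np.sum(r * r)
    return np.array([np.sum(r[:len(r) - k] * r[k:]) / denom
                     for k in range(1, max_lag + 1)])

# Usage sketch with fake arrays (replace with your measured decay and fit):
rng = np.random.default_rng(1)
fit = 1e4 * np.exp(-np.arange(2000) * 0.01)
counts = rng.poisson(fit)
res = weighted_residuals(counts, fit)
print("reduced chi^2 ~", np.mean(res ** 2))
print("max |autocorrelation| ~", np.abs(autocorrelation(res)).max())
```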
We quite often use Rhodamine 6G in ethylene glycol (A ~ 0.05) to check that the polarisers are aligned correctly, measuring the decay at VM and then the anisotropy.
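For completeness, the G-factor and anisotropy arithmetic behind that check (the intensities below are purely illustrative):

```python
# Anisotropy from polarised intensities (illustrative numbers only).
# G corrects for the different detection efficiency of the two emission
# polarisations, usually measured with horizontal excitation: G = I_HV / I_HH.
I_HV, I_HH = 1.05e5, 1.00e5
G = I_HV / I_HH

I_VV, I_VH = 2.40e5, 1.60e5          # vertical excitation, V/H detection
r = (I_VV - G * I_VH) / (I_VV + 2 * G * I_VH)
print(f"G = {G:.3f}, anisotropy r = {r:.3f}")
```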
POPOP is also often used to test how well the system handles a short fluorescence decay.
Dear Philip! Thanks a lot for your patient comments! The current problem with our Becker & Hickl TCSPC system is that the measured lifetime is always shorter than the lifetime measured on other systems, such as a frequency-domain instrument or a HORIBA TCSPC system.
I don't think oxygen plays an important role here, since we measured the same solution on all three machines. Following Peter's comment, I plan to use coumarin-153 to test our system, since we have a laser at around 400 nm.
I guess the problem may originate from some of the hardware settings.
Hi, Peter! Thanks for your kind explanation! I tested my system using a freshly prepared Ludox (scatterer) solution to record the IRF, and this time I got a reasonable lifetime value. It turns out the shortening of the measured lifetime resulted from a distorted IRF curve.