I want to fit TCSPC data in Origin, but the fit with Origin's predefined exponential functions is not very good. Please advise how to fit it in Origin.
Can you explain what decay process (or processes) you are measuring? If it's fluorescence decay, is it for a gaseous, liquid or solid sample? Are you following a bimolecular process with a rise and then a decay, or a simple decay? Decay processes will normally follow exponential decay, though the data often need to be fitted with more than a single decay parameter if more than one decay process is involved.
If you have TCSPC data, why not use appropriate software (e.g., DAS from IBH Horiba)? Those are dedicated programs and give excellent fitting results. Origin (with predefined functions) will fail to fit the data properly if the decay is close to the prompt (instrument response function). A program with the capability to deconvolute/reconvolute the decay from the prompt should be chosen for analysis.
Good fitting is not possible with Origin because of the prompt. If the lifetime is significantly longer than the prompt and no fast component is present, you can ignore the initial part and fit the rest with Origin. The heart of DAS is the reconvolution process, which lets you calculate lifetimes shorter than the FWHM of the prompt. According to some reports, using the reconvolution technique you can measure lifetimes down to 1/10th of the prompt: if the FWHM is 500 ps, it is possible to extract a lifetime of 50 ps. Of course this is a limit and depends on data quality, the fitting program, etc., but the crux of the matter is that you cannot calculate such short time constants using Origin. It is also hard to judge the goodness of fit in Origin: you have to calculate the chi-square manually, whereas DAS calculates it automatically.
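If you do end up tail-fitting in Origin and need the chi-square by hand, the standard TCSPC convention is to weight each channel by its Poisson variance (sigma^2 = counts). A minimal Python sketch, with an illustrative function name and interface:

```python
import numpy as np

def reduced_chi_square(counts, model, n_params):
    """Poisson-weighted reduced chi-square for TCSPC data.

    counts   : measured photon counts per channel
    model    : fitted model values at the same channels
    n_params : number of free fit parameters
    Zero-count channels are excluded, since sigma^2 = counts for Poisson data.
    """
    counts = np.asarray(counts, dtype=float)
    model = np.asarray(model, dtype=float)
    mask = counts > 0
    residuals = (counts[mask] - model[mask]) ** 2 / counts[mask]
    dof = mask.sum() - n_params  # degrees of freedom
    return residuals.sum() / dof
```

A value near 1 indicates a fit consistent with counting statistics; values well above 1 suggest a missing decay component.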
1/10th is a rule of thumb. Theoretically there is no limit to how low you can go, but the quality of the IRF, the S/N ratio and the choice of software limit the shortest extractable lifetime. In reality, you might notice that you can barely go below 1/3rd of the prompt FWHM.
You can export the data file (t, N), that is (time, number of counts), e.g. by copy and paste from Excel or Origin, to an excellent program called Graph (download at padowan.dk). Then you can use a user-defined single or double exponential, N = A*exp(-k1*t) + B*exp(-k2*t), where A + B = N0, and N0 is the initial number of counts at t = 0. If it doesn't fit the first time, you can change the starting parameters. Of course the graphing package uses x as the independent variable, so you'd have to use the "user defined" trend line in the form
$a*exp(-$bx)+$c*exp(-$dx), where the $ sign indicates variable parameters which Graph can vary to achieve a regression fit. If you have a background count, you can use a vertical offset as well, i.e. $a*exp(-$bx)+$c*exp(-$dx)+$e, where Graph will also vary the vertical offset to best fit the data points.
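The same double-exponential-with-offset model can be fitted outside Graph as well, e.g. with SciPy. A minimal sketch on synthetic (t, N) data (all numbers, starting guesses and names are illustrative, not from the thread):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a, k1, b, k2, e):
    """Double exponential with vertical offset: a*exp(-k1*t) + b*exp(-k2*t) + e."""
    return a * np.exp(-k1 * t) + b * np.exp(-k2 * t) + e

# Synthetic decay standing in for data exported from Excel/Origin
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 200)                      # time axis, ns
counts = rng.poisson(biexp(t, 800.0, 1.0, 200.0, 0.2, 10.0)).astype(float)

# Starting parameters matter; change them if the first fit fails to converge
p0 = [counts[0], 0.5, counts[0] / 4, 0.1, counts[-1]]
popt, pcov = curve_fit(biexp, t, counts, p0=p0,
                       sigma=np.sqrt(np.maximum(counts, 1.0)))  # Poisson weights
tau1, tau2 = 1.0 / popt[1], 1.0 / popt[3]            # lifetimes in ns
```

Note this is still a tail fit: like the Graph approach, it is only valid when the lifetimes are long compared to the prompt, so the IRF can be ignored.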
If you need to deconvolute with respect to the response function, then start with the response function itself in Graph, fit that using an appropriate function (e.g., a Gaussian), then add that fitted Gaussian (i.e. the actual function with constants for the fitted parameters) to the user-defined trend line function in Graph (as a multiplier) to obtain your new regression fit. Hope this helps.
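Strictly speaking, reconvolution means numerically convolving the IRF with the model decay rather than multiplying the two functions. A minimal sketch with a fitted Gaussian standing in for the prompt, on fully synthetic data (all numbers are illustrative); it recovers a lifetime shorter than the IRF FWHM, as discussed above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Time axis and a Gaussian IRF; channel width and FWHM are illustrative
dt = 0.05                              # ns per channel
t = np.arange(0.0, 25.0, dt)
irf_fwhm = 0.5                         # ns, prompt FWHM
sigma = irf_fwhm / 2.355               # Gaussian sigma from FWHM
irf = np.exp(-0.5 * ((t - 2.0) / sigma) ** 2)
irf /= irf.sum()                       # normalise so convolution preserves counts

def reconv_model(t, a, k):
    """Single-exponential decay numerically convolved with the IRF."""
    decay = a * np.exp(-k * t)
    return np.convolve(irf, decay)[: t.size]

# Synthetic "measured" decay: true lifetime (200 ps) shorter than the 500 ps FWHM
true_k = 1.0 / 0.2
data = np.random.default_rng(1).poisson(
    np.convolve(irf, 5000.0 * np.exp(-true_k * t))[: t.size]
).astype(float)

popt, _ = curve_fit(reconv_model, t, data, p0=[4000.0, 2.0])
tau_fit = 1.0 / popt[1]                # recovered lifetime, ns
```

This is the same idea dedicated packages like DAS implement, with the measured prompt used directly in place of the fitted Gaussian.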