To my knowledge there is no real physical basis for fitting in wavelength, but this doesn't seem to stop most people from doing it. If you want to fit a specific function to a luminescence peak in eV, it should reflect the carrier distribution n(E) in the initial states and the density of final states N(E). If both are randomly distributed around a mean value then a Gaussian fit makes sense, but there are also cases where a different function or even an asymmetric peak is reasonable. It really depends on how the electrons and holes that can recombine are distributed in energy. Similarly, fitting in wavelength would be justified if the states were distributed in a certain way in wavelength (or 1/energy), but I can't think of any case where that would be the more sensible approach.
My recommendation would be to just interpret the peak position, peak width, or area under the peak of the raw data, and to resort to fitting only if you have overlapping peaks.
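For illustration, here is a minimal Python sketch of reading those three quantities off the raw data without any fitting (numpy assumed; the array names `energy_eV` and `intensity` are placeholders for your own spectrum, and the half-maximum crossing logic assumes a single well-resolved peak on a dense grid):

```python
import numpy as np

def peak_metrics(energy_eV, intensity):
    """Return (peak position, FWHM, integrated area) of a single peak."""
    i_max = np.argmax(intensity)
    peak_pos = energy_eV[i_max]
    half = intensity[i_max] / 2.0
    # Indices where the spectrum is above half maximum; for a single peak
    # the first and last of these bracket the full width at half maximum.
    above = np.where(intensity >= half)[0]
    fwhm = abs(energy_eV[above[-1]] - energy_eV[above[0]])
    # Numerical integral for the area under the peak; abs() in case the
    # energy axis is stored in descending order.
    area = abs(np.trapz(intensity, energy_eV))
    return peak_pos, fwhm, area
```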
I have sometimes heard the practice of fitting luminescence spectra in wavelength with Gaussian curves called "the spectroscopist's error". This is because the Gaussian shape of the emission arises from a roughly Gaussian broadening of the electronic energy levels, which drive the electronic transition and simultaneous photon emission. Therefore one should work in energy rather than in wavelength (even though in most cases you measure in wavelength).
I agree with Manuel, fitting spectra with a normal distribution in wavelength has no physical meaning. Wavelength is inversely proportional to energy, i.e. E = hc/lambda. Therefore a normal distribution in energy will look distorted on a wavelength scale, see the attached plots. To properly fit your spectra you'll have to do it in eV space; then you can plot them back in lambda.
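A sketch along these lines reproduces the effect (not the original attached plots; the peak centre of 2.0 eV and width of 0.2 eV are arbitrary example values):

```python
import numpy as np
import matplotlib.pyplot as plt

HC = 1239.84  # h*c in eV*nm

E = np.linspace(1.2, 3.2, 1000)                 # energy axis in eV
I_E = np.exp(-(E - 2.0) ** 2 / (2 * 0.2 ** 2))  # Gaussian dI/dE
lam = HC / E                                    # corresponding wavelengths in nm
I_lam = I_E * HC / lam ** 2                     # Jacobian: dI/d(lambda) = dI/dE * |dE/d(lambda)|

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(E, I_E)
ax1.set_xlabel("Energy (eV)")       # symmetric Gaussian here
ax2.plot(lam, I_lam)
ax2.set_xlabel("Wavelength (nm)")   # the same peak, visibly asymmetric
plt.tight_layout()
plt.show()
```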
It's worth pointing out that when you switch from wavelength to eV, it is not sufficient to just convert the x-axis. The reason is that a spectrum in wavelength is a plot of dI/d(lambda) vs lambda, while in eV it should be dI/dE vs E. This means you need to multiply the intensity values by the Jacobian |d(lambda)/dE| = hc/E^2 (since E = hc/lambda, this is equivalent to multiplying by lambda^2/hc).
The way I explain this to myself is to note that a spectrum is a histogram, where the bin width is the range of wavelengths d(lambda) that is focussed onto one row of pixels in a CCD (or the step size of the motorised grating when using a PMT). If only the x-axis is converted to eV, the resulting spectrum is a histogram whose bin width is not constant: say d(lambda) = 0.1 nm; this corresponds to a much bigger energy difference dE at 400 nm than at 1000 nm.
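Putting the two comments above together, a minimal conversion sketch could look like this (assuming a measured spectrum stored as wavelength in nm and counts proportional to dI/d(lambda); `wavelength_nm` and `counts` are placeholders for your own data arrays):

```python
import numpy as np

HC = 1239.84  # h*c in eV*nm

def to_energy(wavelength_nm, counts):
    """Convert a (wavelength, dI/d(lambda)) spectrum to (E, dI/dE)."""
    E = HC / wavelength_nm
    # Jacobian correction: dI/dE = dI/d(lambda) * |d(lambda)/dE|
    #                            = dI/d(lambda) * hc / E^2
    counts_E = counts * HC / E ** 2
    # Sort so the energy axis is ascending (converting from wavelength
    # reverses the order of the points).
    order = np.argsort(E)
    return E[order], counts_E[order]
```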
Hello Manuel, interesting point in your last comment. But I think that is only necessary in the case where one is interested in the area under the curve (luminance measurements).
Please take care with possible asymmetric peaks, especially when comparing spectra recorded at different temperatures. Additionally, if your peak is broadened by the finite lifetime of the carriers, you might have to use a Lorentzian line shape to fit your spectra.
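If a Lorentzian is needed, a minimal fitting sketch with scipy could look like the following (the synthetic spectrum and the initial guesses in p0 are hypothetical stand-ins; in practice the guesses should come from your raw-data peak metrics, and the fit should be done on the energy-converted spectrum):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(E, amplitude, centre, gamma):
    """Lorentzian with half width at half maximum gamma (FWHM = 2*gamma)."""
    return amplitude * gamma ** 2 / ((E - centre) ** 2 + gamma ** 2)

# Synthetic example data standing in for a real spectrum already in eV:
rng = np.random.default_rng(0)
energy_eV = np.linspace(1.8, 2.2, 400)
counts_E = lorentzian(energy_eV, 1.0, 2.0, 0.03) \
    + 0.01 * rng.standard_normal(energy_eV.size)

popt, pcov = curve_fit(lorentzian, energy_eV, counts_E, p0=[1.0, 2.0, 0.05])
print("centre = %.4f eV, FWHM = %.4f eV" % (popt[1], 2 * abs(popt[2])))
```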