First apply some form of dark object subtraction (it helps to normalize the image), then apply an atmospheric correction. It is recommended that you use an image with less than 10% cloud cover; even slight cloud cover will scatter the wavelengths at the blue end of the spectrum.
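For what it is worth, a minimal sketch of the dark object subtraction idea, assuming the image is already loaded as a NumPy array of shape (bands, rows, cols). The nodata value and the low-percentile choice are illustrative assumptions, not something from the answer above.

```python
import numpy as np

def dark_object_subtraction(image, nodata=0, percentile=0.1):
    """Subtract a per-band 'dark object' value estimated from the
    darkest valid pixels of each band."""
    corrected = np.empty_like(image, dtype=np.float32)
    for b in range(image.shape[0]):
        band = image[b].astype(np.float32)
        valid = band[band != nodata]
        # Use a low percentile instead of the absolute minimum to be
        # robust against bad or noisy pixels.
        dark_value = np.percentile(valid, percentile)
        corrected[b] = np.clip(band - dark_value, 0, None)
        corrected[b][image[b] == nodata] = nodata
    return corrected
```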
It depends on which imagery you have (and its processing level) and what analysis you intend to do. If you explain in more detail what you intend to do, people can guide you more specifically.
For example, for pixel-based change detection you should check the geometric accuracy (the RMSE should be less than one pixel, preferably less than 0.5 pixel, and you should co-register the images if it exceeds this threshold); also perform cloud and cloud-shadow masking (for Landsat you can apply Zhu's Fmask), and apply an atmospheric or radiometric correction (relative normalization works well if you do not need to infer biophysical parameters).
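As a quick illustration of the geometric accuracy check, here is a sketch that computes the RMSE (in pixels) between tie points located in a reference image and the same features in the image to be checked. The tie-point arrays are assumed inputs; how you obtain them (manual GCPs, feature matching, etc.) is up to you.

```python
import numpy as np

def coregistration_rmse(ref_points, target_points):
    """ref_points, target_points: arrays of shape (n, 2) with (col, row)
    pixel coordinates of the same ground features in both images."""
    ref = np.asarray(ref_points, dtype=float)
    tgt = np.asarray(target_points, dtype=float)
    residuals = np.linalg.norm(ref - tgt, axis=1)  # per-point offset in pixels
    return np.sqrt(np.mean(residuals ** 2))

# Co-register if the RMSE exceeds the 0.5-pixel threshold:
# if coregistration_rmse(ref_pts, tgt_pts) > 0.5:
#     ...  # warp / co-register the target image
```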
Although vegetation indices are in most cases already a kind of normalization, you should consider a relative radiometric normalization to a reference image (e.g. the earliest image). The RADCAL algorithm (based on IR-MAD) by M. Canty and A. Nielsen is probably one of the best options. It is open source and freely downloadable, although you need ENVI/IDL; alternatively you can use the Python script versions of the algorithm, but for those you need some familiarity with running scripts.
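To make the idea concrete: this is not the RADCAL/IR-MAD code by Canty and Nielsen, just a simplified sketch of what relative radiometric normalization does in general, i.e. fit a per-band linear mapping from a target image to the reference image using pixels assumed to be invariant, then apply it. In RADCAL the invariant pixels are selected automatically from the IR-MAD no-change probabilities; here the `invariant_mask` is an assumed input.

```python
import numpy as np

def relative_normalization(target, reference, invariant_mask):
    """target, reference: arrays of shape (bands, rows, cols);
    invariant_mask: boolean array (rows, cols) of (pseudo-)invariant pixels."""
    normalized = np.empty_like(target, dtype=np.float32)
    for b in range(target.shape[0]):
        x = target[b][invariant_mask].astype(np.float64)
        y = reference[b][invariant_mask].astype(np.float64)
        # Ordinary least-squares fit per band (RADCAL itself uses an
        # orthogonal regression on the automatically selected pixels).
        slope, intercept = np.polyfit(x, y, 1)
        normalized[b] = slope * target[b] + intercept
    return normalized
```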
Here are the links to the Canty and Nielsen paper:
I forgot to include topographic correction in the suggested pre-processing chain; it may be important depending on the sensor viewing geometry (I usually work with Landsat, so it is not as necessary for change detection based on vegetation indices such as NDVI).
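If it helps, a minimal sketch of the classic cosine topographic correction, just to show what such a correction involves. The slope and aspect rasters (derived from a DEM) and the scene-level solar angles are assumed inputs; for real work a C-correction or Minnaert correction is usually more robust than the plain cosine method.

```python
import numpy as np

def cosine_topographic_correction(band, slope, aspect, sun_zenith, sun_azimuth):
    """All angles in radians; slope/aspect are per-pixel rasters derived
    from a DEM, sun angles are scene-level scalars."""
    # Cosine of the local solar incidence angle on the tilted surface
    cos_i = (np.cos(sun_zenith) * np.cos(slope) +
             np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))
    cos_i = np.clip(cos_i, 1e-3, None)  # avoid dividing by ~0 in deep shadow
    return band * np.cos(sun_zenith) / cos_i
```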
I should also add that the radiometric normalization of Canty et al. is fully automatic (no prior knowledge is needed) and much more advanced than dark object subtraction.