Here is some explanation of the steps to follow in pansharpening.
Pansharpening is the process of merging a high-resolution panchromatic image with lower-resolution multispectral imagery to create a single high-resolution color image. Google Maps and nearly every map-making company use this technique to improve image quality. Pansharpening produces a high-resolution color image from three, four, or more low-resolution multispectral satellite bands plus a corresponding high-resolution panchromatic band:
Low-res color bands + High-res grayscale band = Hi-res color image
These band combinations are commonly bundled in satellite data sets; for example, Landsat 7 includes six 30 m resolution multispectral bands, a 60 m thermal infrared band, and a 15 m resolution panchromatic band. SPOT, GeoEye, and DigitalGlobe commercial data packages also commonly include both lower-resolution multispectral bands and a single panchromatic band. One of the principal reasons for configuring satellite sensors this way is to keep satellite weight, cost, bandwidth, and complexity down. Pansharpening uses the spatial information in the high-resolution grayscale band and the color information in the multispectral bands to create a high-resolution color image, essentially increasing the resolution of the color information in the data set to match that of the panchromatic band.
One common class of algorithms for pansharpening is called “component substitution,” which usually involves the following steps:
Up-sampling: the color bands are up-sampled to the same resolution as the panchromatic band;
Alignment: the up-sampled color bands and the panchromatic band are aligned to reduce artifacts due to mis-registration (when the data come from the same sensor, this step is usually not necessary);
Forward transform: the up-sampled color bands are transformed to an alternate color space (where intensity is orthogonal to the color information);
Intensity matching: the intensity of the color bands is matched to the pan band intensity in the transformed space;
Component substitution: the pan band is then directly substituted for the transformed intensity component;
Reverse transform: the reverse transformation is performed using the substituted intensity component to transform back to the original color space.
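The steps above can be sketched in a few lines of numpy. This is a minimal illustration, not a production implementation: the "forward transform" here is a simple intensity/color split rather than a full HSI or YCbCr conversion, the intensity matching uses mean/standard-deviation matching as a cheap stand-in for histogram matching, and all array names are hypothetical. The inputs are assumed to be co-registered, already up-sampled, and scaled to [0, 1].

```python
import numpy as np

def pansharpen_cs(ms, pan):
    """Component-substitution pansharpening sketch.

    ms  : (3, H, W) multispectral bands, already up-sampled to pan resolution
    pan : (H, W) panchromatic band on the same grid, co-registered
    Both are assumed to be floats scaled to [0, 1].
    """
    # Forward transform: split the bands into an intensity component and
    # color residuals (a stand-in for a proper HSI/YCbCr transform).
    intensity = ms.mean(axis=0)
    color = ms - intensity

    # Intensity matching: bring the pan band's mean and spread in line
    # with the intensity component (histogram matching is common here).
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12)
    pan_matched = pan_matched * intensity.std() + intensity.mean()

    # Component substitution + reverse transform: replace intensity with
    # the matched pan band and add the color residuals back.
    return np.clip(color + pan_matched, 0.0, 1.0)
```

The output has the pan band's spatial detail with the color information of the multispectral input; the quality of the result depends heavily on how well the chosen intensity component correlates with the actual pan band response.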
Common color-space transformations used for pansharpening are HSI (Hue-Saturation-Intensity) and YCbCr. The same steps can also be performed using wavelet decomposition or Principal Component Analysis, replacing the first component with the pan band.
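The PCA variant just mentioned can be sketched as follows: decorrelate the bands with an eigendecomposition of the band covariance, replace the first (highest-variance) component with a statistically matched pan band, and invert the transform. Again, a minimal illustration with hypothetical names, assuming float arrays on a common grid.

```python
import numpy as np

def pansharpen_pca(ms, pan):
    """PCA-based pansharpening sketch: substitute the first principal
    component with the (matched) pan band, then invert the transform.

    ms  : (B, H, W) up-sampled multispectral bands, float
    pan : (H, W) panchromatic band on the same grid
    """
    b, h, w = ms.shape
    x = ms.reshape(b, -1)                  # bands as rows, pixels as columns
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean

    # Principal components via eigendecomposition of the band covariance.
    cov = xc @ xc.T / xc.shape[1]
    _, evecs = np.linalg.eigh(cov)
    evecs = evecs[:, ::-1]                 # reorder to descending variance
    pcs = evecs.T @ xc                     # (B, H*W) component scores

    # Match pan statistics to the first component, then substitute it.
    p = pan.reshape(-1)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p

    # Reverse transform back to the original band space.
    return (evecs @ pcs + mean).reshape(b, h, w)
```

Because only the first component is replaced, the per-band means are essentially preserved, which is one reason PCA substitution tends to keep overall color balance close to the input.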
Pansharpening techniques can result in spectral distortions of satellite imagery because of the nature of the panchromatic band. The Landsat panchromatic band, for example, is not sensitive to blue light. As a result, the spectral characteristics of the raw pansharpened color image may not exactly match those of the corresponding low-resolution RGB image, resulting in altered color tones. This spectral distortion has led to the development of many algorithms that attempt to reduce it and produce visually realistic imagery.
Several fusion techniques and quality-assessment protocols exist, such as the Wald protocol (Wald et al., 1997), the Zhou protocol (Zhou et al., 1998), the Quality with No Reference index (Alparone et al., 2007), and the Zhang data fusion technique (Zhang, 2004).
Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; and Bruce, L. M. 2007. Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S Data-Fusion Contest. IEEE Transactions on Geoscience and Remote Sensing, 45(10):3012–3021.
Wald, L.; Ranchin, T.; and Mangolini, M. 1997. Fusion of Satellite Images of Different Spatial Resolutions: Assessing the Quality of Resulting Images. Photogrammetric Engineering & Remote Sensing, 63(6):691–699.
Zhang, Y. 2004. Understanding Image Fusion. Photogrammetric Engineering & Remote Sensing, 66:49–61.
Zhou, J.; Civco, D. L.; and Silander, J. A. 1998. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. International Journal of Remote Sensing, 19(4):743–757.
In ERDAS, select Image Interpreter > Spatial Enhancement > Resolution Merge. This tool offers three methods (Principal Component, Multiplicative, and Brovey). Other methods are also available under Spatial Enhancement, such as Mod. IHS, HPF, Wavelet, and Ehlers.
In ERDAS, be sure to read the documentation for any fusion/resolution-merge method of interest. Some methods may be more appropriate than others depending on the ratio of high resolution to low resolution and the number of bands to be fused.
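Of the three Resolution Merge methods mentioned above, the Brovey transform has a particularly simple closed form: each sharpened band is the original band scaled, per pixel, by the ratio of the pan band to the mean of the multispectral bands. A minimal numpy sketch (function and array names are illustrative, not ERDAS API calls):

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey-transform pansharpening sketch.

    ms  : (B, H, W) up-sampled multispectral bands, float
    pan : (H, W) panchromatic band on the same grid
    eps : small constant to avoid division by zero in dark pixels
    """
    ratio = pan / (ms.mean(axis=0) + eps)  # per-pixel intensity ratio
    return ms * ratio                      # broadcasts the ratio over bands
```

Because it is purely multiplicative, the Brovey transform preserves relative band ratios (and hence hue) well, but it can distort absolute radiometry, which is why it is usually recommended for visual products rather than quantitative analysis.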
You might also want to refer to "Zhang, Y., Pan-sharpening for improved information extraction, in Advances in Photogrammetry, Remote Sensing and Spatial Information Sciences. 2008, CRC Press. p. 185-203". There you will find a table with the applications, advantages, and limitations of different techniques; it may help you find the appropriate algorithm.
I would like to invite you to join my LinkedIn group "RS Image Fusion": http://www.linkedin.com/groups/Remote-Sensing-Image-Fusion-5105581?trk=my_groups-b-grp-v. I am about to send out a questionnaire to collect experiences with RS image fusion in order to compile an image fusion atlas. It would be great if you joined and shared your experiments with the rest of the image fusion research community!
Is it possible to do pansharpening on raw images if both the pan image and the multispectral images are from the same sensor (that is, before any radiometric calibration and atmospheric correction), and then convert the pansharpened images to reflectance?