I am working on medical image fusion of MRI and PET. How can I find the quality using PSNR? What should be taken as the reference image? Are there any other good quality metrics for fused images? Please suggest.
In our publication, "Improving signal detection in emission optical projection tomography via single source multi-exposure image fusion," the fusion was applied to multi-exposure projections of the same modality, but performance was measured with human visual system (HVS) scoring because we did not have access to ground-truth (reference) data. See the paper and the following table for more information:
Table 2. Comprehensive data from HVS ranking of images
1. For reference-based fused image quality assessment, you can use the following measures:
When a reference image is available, metrics such as root mean square error (RMSE), spectral angle mapper (SAM), relative dimensionless global error (ERGAS), mean bias (MB), percentage fit error (PFE), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), correlation coefficient (CC), mutual information (MI), universal quality index (UQI), and the structural similarity index measure (SSIM) are used to evaluate the quality of the fused image. In pan-sharpening, for example, the reference is the multispectral (MS) image at the resolution of the panchromatic (PAN) image.
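To make the reference-based case concrete, here is a minimal NumPy sketch of RMSE and PSNR. The toy "reference" and "fused" arrays are synthetic stand-ins, not data from any of the papers mentioned:

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error between a reference and a fused image."""
    ref = ref.astype(np.float64)
    fused = fused.astype(np.float64)
    return np.sqrt(np.mean((ref - fused) ** 2))

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = rmse(ref, fused)
    if err == 0:
        return np.inf  # identical images
    return 20.0 * np.log10(peak / err)

# Synthetic example: a "reference" and a slightly perturbed "fused" result.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
fused = np.clip(ref + rng.normal(0, 2, size=ref.shape), 0, 255)
quality = psnr(ref, fused)
```

Note that both metrics presuppose a trusted reference, which is exactly the sticking point raised later in this thread.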
2. If you have no reference image, you can use the following measures:
When no reference image is available, the quality of the fused image is evaluated with no-reference metrics such as standard deviation (σ), entropy (He), cross entropy (CE), spatial frequency (SF), fusion mutual information (FMI), the fusion quality index (FQI), and the fusion similarity metric (FSM).
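Two of the simplest no-reference measures, entropy and spatial frequency, can be sketched in a few lines of NumPy (the flat/noisy test images below are illustrative only; these are the textbook formulas, not code from the cited reviews):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits per pixel."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """Row/column gradient activity; a higher SF suggests more detail."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
flat = np.full((32, 32), 128.0)
```

A completely flat image scores zero on both measures, while a detailed (or noisy) image scores high on both, which is also a reminder that these metrics reward activity, not necessarily useful information.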
Please refer to the following references for more information if needed:
1. A Review of Quality Metrics for Fused Image.
2. Metrics for Measuring the Quality of Fused Images.
PSNR/SSIM and other reference-based quality metrics are not valid measurements here: image fusion merges two or more images to produce an output of better quality than any of the inputs. So, if you suggest using PSNR/SSIM, can you answer the following?
# Which of the different original images would you take as the reference?
# A high PSNR means the fused image is close to the reference image; is that the objective of image fusion? No.
No-reference (blind) quality metrics are more relevant here, such as PIQE, NIQE, and BRISQUE.
Abbas Cheddad, what are your thoughts on a simple approach using a LUT transformation? As a quick solution, it allows for an immediate and straightforward qualitative assessment, and it can also be analysed quantitatively for spatial/intensity entropy (if needed). The attached image is from Abbas Cheddad's very cool paper, mentioned already:
I used a feature from my ImageJ plugin (linked at the bottom) to batch-assign look-up tables to a grayscale-downsampled image from the paper (IFT-OPT on the right). The most informative palette is likely the one in the upper right (Glasbey). This LUT is typically used to detect compression artifacts, which appear as regions of uniform signal, but I think it can also indicate the presence of information complexity, especially when paired with interpretable alternative LUT transformations. It is also statistically distinct when compared against a stochastic-noise control in tandem with a non-'sorting' LUT.
The edges LUT in the upper left is another option. The other, quite beautiful, linear LUT transforms make it clear that there is more data in the image-fusion/interpolation IFT-OPT versions on the right. Such an additive effect is rare in post-processing filters and would be distinct from the result of an unsuccessful fusion run.
What do you think?
ImageJ plugin paper and site:
Article A Versatile Macro-Based Neurohistological Image Analysis Sui...
https://ijmacros.com
EDIT: Please keep in mind that the LUTs were derived from an RGB image uniformly downsampled to 8-bit, without any color weighting. Per-color-channel transformation, with an appropriate z-overlay or as a projection, would most likely preserve the contrast loss in distal areas.
Or maybe not... it is possible that a single exposure is preferable for the feature of interest: every non-recoverable transformation destroys some information; there is no magic AI.
Miky Timothy, thanks for enriching and colorizing this thread :) You always furnish good insights. LUT transformation could probably be a good pre-processing step for extracting further measurements. But before putting too much faith in LUTs, I would investigate the following:
- What would be the ideal quantitative measurement (e.g., sensitivity to artifacts) to apply to the generated LUT-transformed map? Would it perform better than applying the same measurement to the original signal?
- LUTs are color/intensity transformers, while image fusion is often also associated with structural enhancement; I am not certain that LUTs capture this aspect.
The maps you referred to as the "Glasbey LUT" are analogous to combining noisy channels: the three lowest bit planes (the 1st-3rd LSBs).
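The bit-plane analogy is easy to inspect directly. A short NumPy sketch (the random image is a stand-in for real data):

```python
import numpy as np

def bit_plane(img, k):
    """Extract bit plane k (0 = LSB) of an 8-bit image as a 0/1 array."""
    return (img >> k) & 1

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# The three lowest planes (k = 0, 1, 2) are close to random noise in
# natural images, which is why they behave like a stochastic control.
low_planes = [bit_plane(img, k) for k in range(3)]
```

Comparing statistics of these low planes against a true random control is one way to quantify the "statistically distinct" claim made above.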