11 November 2014

I'm trying to figure out, in general, how contrast-enhancement preprocessing (e.g. histogram equalization, adaptive equalization, or contrast stretching) impacts a subsequent segmentation step.

Imagine I had a workflow in which I segment using an autothresholding method like max-entropy. In ImageJ, I noticed that if I normalize the histogram and then run max-entropy segmentation with the default parameters, the results are identical to the eye; i.e. the segmentation of the original image and of the preprocessed image looks the same, indicating the preprocessing did not affect the segmentation outcome. I'm trying to figure out whether this is a general result (i.e. as long as the histogram is still Gaussian, it doesn't matter to the segmentation algorithm whether it's equalized or not), whether it might be particular to my image or to the histogram equalization method specifically, or whether other contrast-enhancement routines will tend to produce different segmentation results.
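To make the comparison concrete, here is a small sketch of the experiment (my assumptions, not the original workflow: NumPy, a hand-rolled Kapur-style maximum-entropy threshold standing in for ImageJ's MaxEntropy, and a synthetic bimodal image rather than real data). The key property it illustrates is that histogram equalization is a monotonic remap of intensities: thresholding the equalized image at the *remapped* threshold reproduces the original mask up to ties at the threshold, and re-estimating the threshold from scratch on the equalized image usually lands in nearly the same place when the histogram is cleanly bimodal.

```python
import numpy as np

def max_entropy_threshold(image, nbins=256):
    """Kapur-style maximum-entropy threshold (a sketch of the method
    ImageJ labels "MaxEntropy"; not ImageJ's exact implementation)."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 1, -np.inf
    for t in range(1, nbins):
        w0, w1 = cdf[t - 1], 1.0 - cdf[t - 1]
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1  # class-conditional distributions
        h = (-np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
             - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0])))
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

def equalize_hist(image, nbins=256):
    """Histogram equalization: remap each pixel to its approximate CDF
    rank. The remap is monotonic non-decreasing, which is the key
    property for threshold-based segmentation."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    cdf = np.cumsum(hist) / image.size
    def remap(v):
        return np.interp(v, edges[:-1], cdf)
    return remap(image), remap

# Synthetic bimodal "image": two well-separated intensity populations.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.3, 0.05, 5000),
                      rng.normal(0.7, 0.05, 5000)]).reshape(100, 100)

t_raw = max_entropy_threshold(img)
mask_raw = img > t_raw

eq, remap = equalize_hist(img)

# (1) Monotonicity: thresholding the equalized image at the remapped
# threshold reproduces the original mask, up to ties at the threshold.
mask_remapped = eq > remap(t_raw)
print(f"remapped-threshold mask differs on "
      f"{(mask_raw != mask_remapped).mean():.2%} of pixels")

# (2) Re-estimating the threshold from scratch on the equalized image:
# on a cleanly bimodal histogram the masks usually agree on nearly all
# pixels, but exact equality is not guaranteed in general.
mask_eq = eq > max_entropy_threshold(eq)
print(f"re-estimated-threshold mask differs on "
      f"{(mask_raw != mask_eq).mean():.2%} of pixels")
```

Note that (1) holds for any monotonic contrast enhancement, regardless of the histogram's shape, while (2) depends on how the criterion being maximized responds to the reshaped histogram, so it is where different enhancement routines could start to diverge.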

I know it's a general question; I'm just looking to be pointed in the right direction.
