Recently, we conducted a comparative study investigating the effect of different similarity measures and distances on the performance of the KNN classifier; you may find the study here:
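As a rough illustration of what such a comparison looks like in practice, here is a minimal sketch that evaluates the same KNN classifier under several distance metrics via cross-validation. It uses scikit-learn and the Iris dataset purely as an assumed example setup, not the data or code from the study itself:

```python
# Sketch: comparing KNN accuracy under different distance metrics.
# The dataset, k value, and metric list are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
for metric in ["euclidean", "manhattan", "chebyshev"]:
    knn = KNeighborsClassifier(n_neighbors=5, metric=metric)
    scores = cross_val_score(knn, X, y, cv=10)
    print(f"{metric:>10}: mean accuracy = {scores.mean():.3f}")
```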
Data normalization is important before measuring distances, as stated by Waldemar; however, each normalization/standardization method has its own strengths and weaknesses and might not work well on a specific dataset. For example, min-max normalization is affected by outliers: it drags most of the data into a small area while keeping the same relative difference between the extreme values and the rest of the data (illustrated in the sketch below). That is why we invented our new distance metric, which does not need normalization, as the effect of each dimension is bounded in the continuous range [0,1].
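A small numerical sketch of both points follows. The first part shows how a single outlier compresses min-max-scaled values into a narrow band; the second shows a per-dimension bounded dissimilarity of the form 1 - (1 + min)/(1 + max) with a shift for negative values. That formula is my assumption about the kind of metric referred to above, not a quotation of it:

```python
import numpy as np

def minmax_scale(x):
    # Standard min-max scaling to [0, 1]; one extreme value stretches the
    # range and compresses all remaining values into a narrow band.
    return (x - x.min()) / (x.max() - x.min())

feature = np.array([1.0, 2.0, 3.0, 4.0, 1000.0])   # one extreme outlier
print(minmax_scale(feature))                        # non-outliers land below ~0.003

def bounded_dissimilarity(a, b):
    # Per-dimension dissimilarity bounded in [0, 1), summed over dimensions,
    # so no prior normalization is required. The exact form used here
    # (1 - (1+min)/(1+max), shifted for negative values) is an assumption.
    a, b = np.asarray(a, float), np.asarray(b, float)
    lo, hi = np.minimum(a, b), np.maximum(a, b)
    shift = np.where(lo < 0, -lo, 0.0)              # shift negative dims to start at 0
    per_dim = 1.0 - (1.0 + lo + shift) / (1.0 + hi + shift)
    return per_dim.sum()

# A huge raw gap in the second dimension still contributes less than 1.
print(bounded_dissimilarity([1.0, 1000.0], [2.0, 4.0]))
```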
I agree with Ahmad Hassanat, since data normalization may alter the data itself.
In my experience, we used image segmentation as a preprocessing step before calculating image similarity, and we found that the similarity accuracy increased. However, when we applied normalization to the image intensities before measuring similarity, the accuracy did not change.
Sometimes we need normalization in order to bring the ranges of the features close to one another; this means that if some features take extreme values, they become more comparable to the others, which reduces intra- and inter-feature variation.
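A toy example of that effect: with two features on very different scales, the larger-scale feature dominates the Euclidean distance, while after min-max scaling both features contribute on a comparable footing. The feature values and per-feature ranges below are illustrative assumptions:

```python
import numpy as np

a = np.array([0.2, 5000.0])
b = np.array([0.9, 5100.0])

# Unnormalized distance is driven almost entirely by the second feature.
print(np.linalg.norm(a - b))          # ~100.0

lo = np.array([0.0, 4000.0])          # assumed per-feature minimum over the dataset
hi = np.array([1.0, 6000.0])          # assumed per-feature maximum over the dataset
a_n = (a - lo) / (hi - lo)
b_n = (b - lo) / (hi - lo)

# After min-max scaling, both features lie in [0, 1] and both matter.
print(np.linalg.norm(a_n - b_n))      # ~0.70
```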