In theory, normalisation is used to transform data so that values become comparable and often unitless. One simple way to normalise is to divide each element of the data set by the largest value, so that the largest element becomes 1.
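As a quick sketch of that divide-by-maximum approach (the data here is made up for illustration):

```python
def max_normalise(values):
    """Divide every element by the largest value so the maximum becomes 1."""
    peak = max(values)
    return [v / peak for v in values]

heights_cm = [150.0, 175.0, 200.0]
print(max_normalise(heights_cm))  # the largest element maps to exactly 1.0
```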
If you can be more specific, I should be able to suggest something better.
I believe the answer depends on how the clustering is performed. With an algorithm such as DBSCAN, which relies on a fixed distance threshold, a given threshold value may make sense when the data are normalised but not when they are left on their original scales.
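To illustrate why a fixed threshold interacts with scaling, here is a small sketch (with hypothetical two-feature data, one feature on a much larger scale than the other) showing that two points can fall inside a DBSCAN-style distance threshold after max-normalisation while being far outside it in raw units:

```python
def euclidean(a, b):
    """Plain Euclidean distance between two equal-length tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical points: the second feature dominates the raw distance.
raw_a, raw_b = (1.0, 5000.0), (1.2, 5200.0)

# Max-normalise each feature by its largest observed value.
maxes = [max(x, y) for x, y in zip(raw_a, raw_b)]
norm_a = tuple(x / m for x, m in zip(raw_a, maxes))
norm_b = tuple(x / m for x, m in zip(raw_b, maxes))

eps = 0.5  # a fixed DBSCAN-style neighbourhood radius
print(euclidean(raw_a, raw_b) <= eps)    # False: raw distance is ~200
print(euclidean(norm_a, norm_b) <= eps)  # True: normalised distance is ~0.17
```

So the same `eps` that groups the normalised points would treat the raw points as unrelated, which is the scale sensitivity described above.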