Arne Andersen's response should help! If you have chemical compositions you should be using log-ratio transformations (which may also normalise the data, but that's not the main reason you'd use log-ratios - it's to remove fixed-sum closure, which can do weird things to relational measures such as correlation, regression, and ordination).
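A minimal sketch of one common log-ratio approach, the centered log-ratio (clr) transform, which divides each component by the geometric mean of the composition before taking logs (the data values here are made up for illustration):

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform for a compositional vector.

    Each component is divided by the geometric mean and logged,
    which removes the fixed-sum (closure) constraint."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))  # geometric mean of the parts
    return np.log(x / g)

# A hypothetical composition summing to 1 (e.g. mineral fractions)
parts = [0.6, 0.3, 0.1]
z = clr(parts)
print(z)  # clr-transformed components sum to ~0, not 1
```

Note that clr requires strictly positive parts, so zeros in the composition have to be handled (e.g. by replacement) before transforming.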
It should be done according to the objectives of the research study and the expected results. However, some of the most common tests for assessing the normality of a data set are as follows:
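Two widely used normality tests are Shapiro-Wilk and the D'Agostino-Pearson K^2 test; a minimal sketch using SciPy on simulated data (the sample here is synthetic, for illustration only):

```python
import numpy as np
from scipy import stats

# Simulated sample; replace with your own measurements
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=200)

# Shapiro-Wilk: a small p-value is evidence against normality
stat, p = stats.shapiro(sample)
print(f"Shapiro-Wilk W={stat:.3f}, p={p:.3f}")

# D'Agostino-Pearson K^2, based on sample skewness and kurtosis
stat2, p2 = stats.normaltest(sample)
print(f"K^2={stat2:.3f}, p={p2:.3f}")
```

Graphical checks (histograms, Q-Q plots) are a useful complement, since formal tests become very sensitive at large sample sizes.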
I agree with Dr. Farhang - the overall approach to normalization should first be guided by the questions/applications on which you are working, along with basic knowledge of the data.
For example, much environmental data is collected as time series that should be evaluated for seasonality and trend. If the data are spatially heterogeneous, spatial autocorrelation should also be incorporated into your "normalization" and into any analyses you perform on the data.
A useful transformation for spatially heterogeneous time series data (e.g., monthly soil moisture by climate division) is the Z-transform: z(i,j) = (x(i,j) - x_bar(i,j)) / sigma(i,j), where x(i,j) is the observation at spatial unit i in season j, x_bar(i,j) is the corresponding mean, and sigma(i,j) is the sample standard deviation. The transformed values z(i,j) can then be interpreted as data values in units of standard deviations about the mean, and can be compared across both seasons and geographic regions.
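A minimal sketch of this group-wise Z-transform with pandas, standardizing within each (region, month) combination; the column names and data values are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical soil-moisture records: one row per observation,
# indexed by spatial unit (region i) and season (month j)
df = pd.DataFrame({
    "region": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "month":  [1, 1, 7, 7, 1, 1, 7, 7],
    "x":      [30.0, 34.0, 10.0, 14.0, 50.0, 54.0, 20.0, 24.0],
})

# z(i,j) = (x(i,j) - mean(i,j)) / sd(i,j), computed within each
# region-month group, so z is comparable across seasons and regions
g = df.groupby(["region", "month"])["x"]
df["z"] = (df["x"] - g.transform("mean")) / g.transform("std")
print(df)
```

In practice you would estimate the group means and standard deviations from a long enough record for them to be stable; with only a few observations per group, the standardized values are noisy.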
Finally, it is important to be rigorous in defining points as "outliers", and to think about the difference between an observation that is an "outlier" and an observation of a genuinely extreme event in the context of your data.
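One common (though not the only) rigorous convention is to flag standardized values beyond a fixed threshold, e.g. 3 standard deviations, as candidates for inspection rather than automatic removal; a minimal sketch with an assumed threshold and made-up z-values:

```python
import numpy as np

def flag_extremes(z, threshold=3.0):
    """Flag standardized values beyond +/- threshold SDs.

    A flag marks a candidate for inspection, not automatic removal:
    an extreme value may be a real event rather than an error."""
    z = np.asarray(z, dtype=float)
    return np.abs(z) > threshold

z = np.array([0.2, -1.1, 3.5, 0.4, -4.2])
print(flag_extremes(z))  # [False False  True False  True]
```

Whatever rule you choose, document it and apply it consistently, and record what was done with each flagged observation.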