Empirical data collected by ecologists (and by scientists in other fields) are known to often fail the assumptions required by parametric hypothesis tests. After a long history of scientists hammering their data into shape so that they would fit those assumptions, or ignoring "mild violations" on the grounds that the parametric test would still be robust enough, we are now in a time when everyone has access to (more or less) powerful computers.

This has led to increasing use of non-parametric statistics that can test hypotheses and build predictive models from the data "as they are", either by relying on large datasets or by generating reference distributions through simulation, re-sampling, permutations, etc. My question comes from the fact that when I ask for advice on data analysis, I often get answers focused on whether the assumptions are fulfilled, or suggestions to apply parametric tests after transforming the data.
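To make concrete the kind of approach I mean, here is a minimal sketch (in Python, with made-up skewed data; all names and values are hypothetical) of a two-sample permutation test, where the null distribution of the difference in means is built by shuffling group labels rather than assuming normality:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical example data: two "treatments" with skewed (non-normal) responses
group_a = rng.lognormal(mean=1.0, sigma=0.8, size=30)
group_b = rng.lognormal(mean=1.4, sigma=0.8, size=30)

observed_diff = group_b.mean() - group_a.mean()

# Permutation test: shuffle the group labels many times and rebuild the
# null distribution of the difference in means from the data themselves
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
n_perm = 10_000
perm_diffs = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(pooled)
    perm_diffs[i] = shuffled[n_a:].mean() - shuffled[:n_a].mean()

# Two-sided p-value: proportion of permuted differences at least as extreme
p_value = np.mean(np.abs(perm_diffs) >= abs(observed_diff))
print(f"observed difference = {observed_diff:.3f}, permutation p = {p_value:.4f}")
```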

Of course, data transformations are still very important for standardising units of measurement or for reducing the influence of some variables, but do you think the normality assumption is still an issue? The homogeneity-of-dispersions assumption can still be a problem, of course, but heterogeneity can also be seen as an important feature of the system itself (the fact that one "treatment" gives more dispersed or erratic results than another). Are parametric tests becoming the floppy disks of data analysis, or are there still areas where parametric approaches will always perform better?
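As a small illustration of treating dispersion as a quantity of interest rather than just a violated assumption, here is a hedged sketch (again with hypothetical data) using Levene's test to compare spreads and Welch's t-test, which compares means without assuming equal variances:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical example: same mean but different spread between two "treatments"
treat_1 = rng.normal(loc=10.0, scale=1.0, size=40)
treat_2 = rng.normal(loc=10.0, scale=3.0, size=40)

# Levene's test asks whether the dispersions differ; a small p-value here
# can be read as a finding about the system, not only as a broken assumption
w_stat, p_levene = stats.levene(treat_1, treat_2)
print(f"Levene W = {w_stat:.2f}, p = {p_levene:.4f}")

# Welch's t-test then compares the means without assuming equal variances
t_stat, p_welch = stats.ttest_ind(treat_1, treat_2, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_welch:.4f}")
```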
