I am constantly frustrated by my colleagues and students getting their manuscripts returned, either rejected or with suggested revisions, on the basis of inappropriate statistical advice. I consult across the full range of 'health research' and have noticed that in some disciplines in particular (e.g. pre-clinical), the advice reviewers give is often inappropriate or simply wrong. How do we deal with this problem? Should the student just 'roll over' and take the inappropriate advice, or should they 'fight it out'?

Let me give you an example: assumption checks. In many clinical studies, particular care is taken to power the study appropriately, that is, to calculate the sample size needed to detect a particular clinically meaningful effect (e.g. a difference between two means). I recommend that researchers use their eyes to gauge the (approximate) equality of variances between two groups (variance homogeneity being an assumption of the standard independent-samples t-test), but many reviewers (no doubt SPSS users) will recommend that the researcher perform Levene's test for variance homogeneity. This is pointless for two reasons:

1. The study has been powered to detect a particular size of difference between two means, which may or may not mean it is adequately powered to detect a difference between two variances.

2. If there is a 'statistically significant' difference between the two variances, does it follow that the difference is large enough to cause problems for the (generally robust) t-test? Conversely, when we have small samples, does a non-significant Levene's test mean we are safe? (See the simulation sketch below.)

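To make the second point concrete, here is a minimal simulation sketch in Python (not part of my original argument; the sample size of 10 per group and the 1 vs 1.5 SD ratio are hypothetical choices made purely for illustration). It asks: with two small, equal-sized groups whose variances genuinely differ but whose means are identical, how often does Levene's test flag the variance difference, and how often does the pooled t-test falsely reject the (true) equality of means?

```python
# Minimal simulation sketch: hypothetical n and SD ratio, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sims = 10, 5000          # small, equal-sized groups
levene_flags = 0              # Levene's test detects the real variance difference
ttest_false_pos = 0           # pooled t-test falsely rejects equal means

for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)   # group 1: SD = 1
    b = rng.normal(0.0, 1.5, n)   # group 2: SD = 1.5, same mean
    if stats.levene(a, b).pvalue < 0.05:
        levene_flags += 1
    if stats.ttest_ind(a, b, equal_var=True).pvalue < 0.05:
        ttest_false_pos += 1

print(f"Levene's test detects the genuine variance difference: {levene_flags / n_sims:.0%}")
print(f"Pooled t-test false-positive rate (true means equal):  {ttest_false_pos / n_sims:.0%}")
```

In a setting like this, Levene's test typically misses the real variance difference, while the pooled t-test's type I error rate stays close to the nominal 5% (it is mainly unequal group sizes combined with unequal variances, not unequal variances alone, that cause real trouble). So the pre-test offers little reassurance when it is non-significant and little useful warning when it is significant.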
This is just one example. The main problem is that reviewers are often context experts who may have poor research-methods knowledge and/or experience. There is clearly a major shortage of methodologists out there. How do we solve this problem?
