
Recently, I attended a stats-related seminar, and the speaker was very adamant that P-value hypothesis/significance testing is over-emphasized and over-used in research. His driving point was that the P-value is an all-or-nothing, arbitrary threshold (in terms of the alpha we set) that is strongly impacted by sample size, which I completely agree with. With smaller samples we run a greater risk of a Type II error, since it is harder to tease out a significant effect, while very large samples all but guarantee a significant result, even when there may not really be a meaningful effect at all.
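To make that sample-size point concrete, here is a minimal simulation sketch in Python (NumPy/SciPy). The true effect of 0.05 standard deviations and the sample sizes are purely illustrative assumptions on my part, not anything from the seminar:

```python
# Minimal sketch: how sample size alone drives the p-value when the true
# effect is fixed and practically negligible (0.05 SD, assumed for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.05  # tiny standardized mean difference (illustrative assumption)

for n in (20, 200, 2000, 20000):
    group_a = rng.normal(0.0, 1.0, n)
    group_b = rng.normal(true_effect, 1.0, n)
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"n per group = {n:6d}   p-value = {p_value:.4f}")

# With small n the tiny effect is rarely "significant" (Type II risk);
# with very large n it almost always is, even though it is trivial in size.
```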

Yet, I still felt that P-values must have a purpose. For example, how can we make sure we don't commit a Type I error, where we reject the null hypothesis when there is actually nothing going on? After all, if we simply use effect sizes, which ignore sample size, then small samples would seem to be especially susceptible to fluke, luck-of-the-draw outliers that lead us to believe something is happening when nothing is.
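Here is a quick sketch of the fluke risk I mean, again with made-up numbers (5 observations per group, 10,000 simulated studies) chosen just for illustration:

```python
# Minimal sketch: under a true null (identical distributions), how often does a
# tiny sample produce a "large" observed effect size (Cohen's d) by chance alone?
import numpy as np

rng = np.random.default_rng(1)
n, sims = 5, 10_000  # illustrative choices, not from the original post
big_d = 0

for _ in range(sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)  # same distribution: nothing is going on
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = abs(a.mean() - b.mean()) / pooled_sd
    big_d += d > 0.8  # conventionally a "large" effect

print(f"Share of null samples with |d| > 0.8: {big_d / sims:.2%}")
# With only 5 per group, a "large" observed effect shows up in a sizeable share
# of purely random samples, which is exactly the fluke/outlier worry above.
```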

Well, I posed this question to him at the end of the seminar, and he told me that confidence intervals could be used instead. I'm not sure whether I successfully wrapped my head around what he was saying, but if I interpreted him correctly, then what purpose does a P-value actually serve? If confidence intervals can do the same job, and more, why are P-values even used?
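If it helps frame what I think he may have meant: as I understand it, a 95% confidence interval for a mean difference excludes 0 exactly when the two-sided p-value is below 0.05, so the interval carries the same yes/no decision plus the magnitude and precision of the effect. A small sketch of that duality (hypothetical data; the confidence_interval() call assumes SciPy 1.10 or newer):

```python
# Minimal sketch of the CI / p-value duality: the 95% CI for the difference in
# means excludes 0 if and only if the two-sided p-value is below 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 40)
b = rng.normal(0.4, 1.0, 40)  # hypothetical data with an assumed 0.4 SD shift

res = stats.ttest_ind(a, b)
ci = res.confidence_interval(confidence_level=0.95)  # requires SciPy >= 1.10
print(f"p = {res.pvalue:.4f}, 95% CI for the mean difference: "
      f"({ci.low:.3f}, {ci.high:.3f})")
# The interval (ci.low, ci.high) excludes 0 exactly when res.pvalue < 0.05.
```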

Thank you for any insight on this topic!

Kris
