The theory of confidence intervals (CIs) was developed by Neyman in the 1930s. It is the theoretical basis of the GUM's uncertainty framework. However, the CI-based approach to uncertainty estimation has been challenged for years. It has become clear, and is accepted by many statisticians and scientists, that a CI is not a procedure for statistical inference. Indeed, the CI is one of the most confusing concepts in statistics and is often misinterpreted or misunderstood. The confusion about CIs exists not only among practitioners but also among statisticians and experts. In their recent paper entitled “The fallacy of placing confidence in confidence intervals,” Morey et al. (2016) stated, “We have suggested that confidence intervals do not support the inferences that their advocates believe they do. … we believe it should be recognized that confidence interval theory offers only the shallowest of interpretations, and is not well-suited to the needs of scientists.” A CI procedure merely generates a collection of CI “sticks” with a specified “capture rate” (i.e., the confidence level); it is not a method for inferring measurement precision from the sample at hand. Morey et al. (2016) suggested abandoning CIs in science. Matloff (2014) purposely excludes the t-test and t-interval from his textbook. The international journal Basic and Applied Social Psychology (BASP) has officially banned CIs and the null hypothesis significance testing procedure (NHSTP) since 2015 (Trafimow and Marks 2015).
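
To illustrate the “capture rate” point, here is a minimal simulation sketch (not part of the original discussion; the normal distribution, sample size, and 95% level are arbitrary assumptions). Across many repeated samples, roughly 95% of the computed t-intervals contain the true mean, yet the procedure says nothing about whether any single interval at hand has captured it.

```python
# Simulation sketch: long-run "capture rate" of the 95% t-interval procedure.
# Assumed setup (illustrative only): normal data, true mean 10, sigma 2, n = 15.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, sigma, n, n_reps, level = 10.0, 2.0, 15, 10_000, 0.95

covered = 0
for _ in range(n_reps):
    sample = rng.normal(true_mean, sigma, size=n)
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)            # sample standard error
    t_crit = stats.t.ppf(0.5 + level / 2, df=n - 1)  # two-sided t critical value
    lo, hi = m - t_crit * se, m + t_crit * se        # one CI "stick"
    covered += (lo <= true_mean <= hi)               # did this stick capture the mean?

print(f"Empirical capture rate: {covered / n_reps:.3f} (nominal {level})")
```

The long-run frequency comes out near the nominal level, but each individual interval either contains the true mean or it does not, which is the distinction the critique above turns on.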
