Over 800 #Scientists signed a Comment in #Nature calling to "Retire #statistical #significance" in #research. I believe statistics subject-matter experts need to weigh in on the matter, yet only 16 of the 800+ signatories were affiliated with a #Biostatistics department, and only 8 of them were in a #Mathematics and Statistics department. The supplementary data stated that 51% (402/791) of the reviewed papers #erroneously claimed "no effect".

  • How do we know those statements are errors? Couldn't it be that the variables of interest truly have no effect, while other unknown nuisance factors are responsible for the observed differences? We are simply assuming them to be random errors.
  • Was a power analysis specified, and was the sample size determined based on it?
  • What did the sampling and experimental design look like for the 791 papers? Were they the best choices for the type of research, data, distribution, etc.?
  • How representative were the samples of the target population?
  • How many replications were used? Were there enough degrees of freedom (DF) to estimate the error? REPLICATIONS ARE ALMOST ALWAYS FEWER THAN RECOMMENDED.
  • Were the assumptions of the statistical models (ANOVA, GLM, BLUP, etc.) checked so that the analysis fit the data? THESE CHECKS ARE OFTEN IGNORED.
  • How sensitive were the results? Were other significance levels tried?
  • What is the practical relevance of the observed differences on the measurement scale?
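The power and sample-size questions above can be illustrated with a minimal sketch. Assuming (for illustration only) a two-sample comparison with known unit variance and a z-test approximation, the code below first computes the per-group sample size needed for 80% power at α = 0.05, and then simulates an underpowered study to show how often a real effect yields p > 0.05, i.e. how easily "no effect" gets reported by mistake. The function names and parameter choices are hypothetical, not taken from any of the 791 papers.

```python
import math
import random
from statistics import NormalDist

Z = NormalDist()  # standard normal distribution

def required_n_per_group(effect_size, alpha=0.05, power=0.80):
    """Per-group n for a two-sample z-test of a standardized mean
    difference d (normal approximation): n = 2*((z_{1-a/2} + z_{1-b}) / d)^2."""
    z_a = Z.inv_cdf(1 - alpha / 2)
    z_b = Z.inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) / effect_size) ** 2)

def simulate_nonsig_rate(effect_size, n, sims=2000, alpha=0.05, seed=1):
    """Fraction of simulated studies that report p > alpha even though a
    real effect of the given size exists (a misleading 'no effect')."""
    rng = random.Random(seed)
    nonsig = 0
    for _ in range(sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n)]
        se = math.sqrt(2.0 / n)  # standard error with known sigma = 1
        z = (sum(b) / n - sum(a) / n) / se
        p = 2 * (1 - Z.cdf(abs(z)))
        if p > alpha:
            nonsig += 1
    return nonsig / sims

# A small standardized effect (d = 0.3) needs ~175 subjects per group,
# yet with only n = 20 per group most runs come out "nonsignificant".
print(required_n_per_group(0.3))
print(simulate_nonsig_rate(0.3, n=20))
```

With n = 20 per group the simulation reports p > 0.05 in the large majority of runs despite the effect being real, which is exactly why a nonsignificant result cannot, by itself, justify the claim "no effect".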

https://www.nature.com/articles/d41586-019-00857-9
