So, as I understand the current thinking, a significant result should be followed by a consideration of the size of the effect, because statistical significance is one thing and practical significance is another.

But what do I do when I get a non-significant result? It seems to me that I can’t simply conclude that there was an effect (e.g., d = .5) but that I lacked power, and that if I increase the sample size I will therefore find a significant d = .5. That is, I cannot conclude that there was a real effect just because I observed an effect size of d = .5 in my sample. I can’t draw this conclusion because the non-significant result means that the estimated effect size cannot be trusted.
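Just to make the "lacked power" part concrete, this is the kind of calculation I have in mind (the d = 0.5, alpha = .05, 80% power, and n = 20 per group figures are only placeholder values, not from any actual data; statsmodels' TTestIndPower is one way to do it):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group sample size needed to detect d = 0.5 with 80% power
# in a two-sided independent-samples t-test at alpha = .05.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(f"required n per group: {n_per_group:.1f}")   # roughly 64 per group

# Conversely, the power a small study (say, n = 20 per group) actually had
# if the true effect really were d = 0.5.
achieved_power = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05,
                                alternative='two-sided')
print(f"power with n = 20 per group: {achieved_power:.2f}")  # roughly a third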

Perhaps the effect size I found was due to chance, and a properly powered study might come up with a completely different effect size. So increasing the sample size may lead to a completely different conclusion. Basically, a significant result prompts me to consider the effect size, but from a non-significant result I can conclude very little. Am I understanding this correctly?
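To see for myself how noisy an estimated effect size is at a small sample size, I put together a small simulation (the true d = 0.5, n = 20 per group, and the normal-data assumption are all just illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_d = 0.5      # hypothetical true effect, assumed only for the simulation
n = 20            # per-group sample size of an "underpowered" study
reps = 10_000

observed_d = np.empty(reps)
p_values = np.empty(reps)

for i in range(reps):
    a = rng.normal(0.0, 1.0, n)        # control group
    b = rng.normal(true_d, 1.0, n)     # treatment group, shifted by true_d SDs
    # Pooled-SD Cohen's d for the observed samples
    pooled_sd = np.sqrt(((n - 1) * a.var(ddof=1) + (n - 1) * b.var(ddof=1))
                        / (2 * n - 2))
    observed_d[i] = (b.mean() - a.mean()) / pooled_sd
    p_values[i] = stats.ttest_ind(b, a).pvalue

sig = p_values < 0.05
print("2.5th / 50th / 97.5th percentile of observed d:",
      np.percentile(observed_d, [2.5, 50, 97.5]))
print("share of runs reaching p < .05 (power):", sig.mean())
print("mean observed d among significant runs:", observed_d[sig].mean())
print("mean observed d among non-significant runs:", observed_d[~sig].mean())
```

In runs like this the observed d scatters widely around the true value (from near zero, or even negative, up to well above 1), and the replications that do happen to reach significance overestimate the effect, which is exactly the "cannot be trusted" point I am trying to pin down.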

So, if recent years have taught us that a significant result is NOT the firm proof of an effect we so desire, what about a non-significant result?
