There are no rules, and I certainly haven't heard this one before. Where did it come from?
Small studies have low statistical power to detect anything but very large effect sizes. This is why they are used in basic research: two groups of six rats/mice is pretty much standard, because investigators are only interested in really dramatic effects in early-phase research, since most of these turn out to be blind alleys anyway.
The real question is whether you have enough participants to be able to detect an important real-life effect size with any degree of power.
You may accept whatever you like, as long as you can give some rationale for your choice that a reader with some common sense can follow. However, there exist people (some editors, some reviewers) who seem to think that there is something like a rule. If your aim is to publish your research, then you will have to hope that the editor of the journal you aim at (and the reviewers) will follow your rationale and accept your choice.
But, in this case, the threshold of 0.1 or 0.05 looks more like the significance level of the test (and you are free to set it to whatever value you choose); it is not the p-value itself, which is computed from the data.
I'm sure the above is pretty unclear; my apologies.
Maybe you could have a look at the following link (start with the "The testing process" section to see if it fits your problem).
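As a minimal sketch of that distinction in Python (the scores here are invented, and scipy is assumed): the significance level alpha is something you choose before the analysis, while the p-value is computed from the data.

```python
# Minimal sketch: alpha is chosen in advance; p comes from the data.
# The scores below are invented for illustration.
from scipy import stats

group_a = [4.1, 5.0, 4.8, 5.3, 4.6, 5.1, 4.9, 5.2]
group_b = [5.0, 5.6, 5.9, 5.4, 6.1, 5.7, 5.5, 6.0]

alpha = 0.05  # the threshold, set by the analyst before testing
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"p = {p_value:.4f}")
print("reject the null" if p_value < alpha else "fail to reject the null")
```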
Maybe I wasn't clear. I meant: there is no rule. The rationale was not a justification of any rule but a justification for "rejecting the null" at some observed p-value (whatever value it may have). So I may reject the null when p = 0.13, for instance, but I should have some rationale or argument for that decision (why am I willing to reject this null in this case even though p = 0.13? Does it make sense? Why do I think that this is not too liberal?).
I know scientists who (to my understanding: wrongly) think that one can reject the null only when p < 0.05.
I agree with Ronán Michael Conroy: "The real question is whether you have enough participants to be able to detect an important real-life effect size with any degree of power." Usually an 8-person study can be considered a pilot study, whose data can be used for sample-size analysis.
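As an illustration, here is a sketch of how pilot data could feed a sample-size calculation, assuming the statsmodels library and a hypothetical pilot effect size of Cohen's d = 0.5:

```python
# Sketch: plan a follow-up study from a pilot effect-size estimate.
from statsmodels.stats.power import TTestIndPower

d_pilot = 0.5  # Cohen's d estimated from the pilot data (hypothetical)
n_per_group = TTestIndPower().solve_power(effect_size=d_pilot,
                                          alpha=0.05, power=0.80,
                                          ratio=1.0,
                                          alternative='two-sided')
print(f"participants needed per group: {n_per_group:.0f}")  # about 64
```

Note how quickly the required sample grows as the effect shrinks: halving d to 0.25 roughly quadruples the required n.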
Maybe I have to be clearer, sorry. OK, this is the case: I ran an experimental study with 16 students (8 in the experiment group and 8 in the control group). I interpret the results of the Wilcoxon pre- and post-test according to p < 0.05.
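For illustration, a minimal sketch of that kind of analysis in Python with invented scores (scipy assumed); scipy.stats.wilcoxon is the paired signed-rank test for pre vs. post within one group, while mannwhitneyu would compare the two groups:

```python
# Sketch: Wilcoxon signed-rank test on hypothetical pre/post scores
# for one group of 8 students.
from scipy import stats

pre  = [12, 15, 11, 14, 13, 10, 16, 12]  # invented pre-test scores
post = [13, 17, 14, 18, 18, 16, 23, 20]  # invented post-test scores

stat, p = stats.wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.3f}")
```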
Did you do pretest measurements? These would add to your statistical power.
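For example, adjusting for the pre-test as a covariate (an ANCOVA-style model) typically gives more power than comparing post-test scores alone. A sketch, assuming pandas and statsmodels and entirely hypothetical data:

```python
# Sketch: ANCOVA-style model, post-test regressed on group with the
# pre-test as a covariate. All data are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["control"] * 8 + ["experiment"] * 8,
    "pre":  [11, 13, 12, 14, 10, 15, 12, 13,
             12, 14, 11, 13, 15, 10, 14, 12],
    "post": [12, 14, 13, 14, 11, 16, 13, 14,
             15, 18, 14, 17, 19, 13, 18, 15],
})

model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.summary())  # the C(group) term is the baseline-adjusted effect
```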
Your real problem here is not significance; it is power. With two groups of 8 participants, you have just 50% power to detect a difference of 1 standard deviation between the groups (equivalent to a situation in which there is a 75% chance that a person from the better group will score higher than a person from the worse group).
You have 90% power to detect a difference of 1·7 standard deviations, which amounts to an almost total non-overlap between the two distributions. So anything other than a dramatic effect is unlikely to be detected by a study that is so small.
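Those figures can be roughly reproduced with a standard power calculation, assuming a two-sided two-sample t-test via statsmodels (the exact values depend on the test and on one- vs. two-sided assumptions, so they match only approximately):

```python
# Rough check of the power figures quoted above for n = 8 per group.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (1.0, 1.7):
    power = analysis.power(effect_size=d, nobs1=8, alpha=0.05, ratio=1.0)
    print(f"d = {d}: power = {power:.2f}")  # roughly 0.46 and 0.88
```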