Hi all, I've conducted a pilot RCT to assess the efficacy of a learning tool in psychology students. I was only able to recruit 22 participants (13 vs. 9). Should I report p-values and statistical significance, keeping in mind the SMALL sample size?
Actually: no. It is not the aim of a pilot study to assess the statistical significance of the data under a hypothesis, but rather to estimate relevant characteristics of the random variable that can be used for inference in further studies.
It is not quite clear to me what you mean by "report". If you mean reporting it to your boss/client/principal, then the p-value is (or should be) completely uninteresting, just as I said above. But if you are thinking of a publication, I wonder why you would publish a pilot study.
I think that some researchers ... back at the beginnings of p-value use in the early 20th century ... were taught to look at a p-value early in a study to decide whether they wanted to keep going, but I don't accept that logic. It seems too arbitrary. (P-values are not that useful. "Significance" is a misnomer.)
A pilot study can help you estimate standard deviations* and work out operational issues, and perhaps even learn about issues you had not considered.
*Jochen notes "...to estimate relevant characteristics of the random variable," one of which is the standard deviation used to estimate sample-size requirements for the full study.
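To make that concrete, here is a minimal sketch of how a pilot-study SD estimate feeds a sample-size calculation for the full trial, using the standard normal-approximation formula for a two-arm comparison of means. All the numbers (SD of 10, minimum detectable difference of 5, the usual alpha = 0.05 and 80% power) are illustrative assumptions, not values from this thread:

```python
# Sketch: turning a pilot SD estimate into a full-trial sample size.
# Formula: n per group = 2 * (z_{1-a/2} + z_{power})^2 * sd^2 / delta^2
from statistics import NormalDist

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two means."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    return 2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2

# e.g. pilot SD of 10 points, smallest difference worth detecting = 5 points
print(round(n_per_group(sd=10, delta=5)))  # -> 63 per group
```

Note how sensitive the answer is to the SD: this is exactly why a pilot's job is to pin that quantity down, not to test the hypothesis itself.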
I second your comments. A pilot RCT is never intended to test the significance of observed differences between groups, only to provide useful data for planning subsequent larger trials: estimates of variability and effect size, some relevant experience with how many patients must be screened in order to enroll each "study" patient, and estimates of attrition/dropout versus completion of the full study protocol (the "operational issues" James cited). Those let you determine how many patients will need to be screened to enroll XXX in the larger trial, how many study sites are needed, and so on.
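The planning arithmetic above is trivial but worth writing down. A back-of-envelope sketch, where the dropout rate and screen-to-enroll yield are hypothetical placeholders standing in for what the pilot actually observed:

```python
# Sketch: from completers needed to screening targets, using pilot-derived
# operational rates. All rates here are hypothetical placeholders.
import math

n_required = 126          # completers needed in the full trial (both arms)
dropout_rate = 0.20       # attrition observed in the pilot
screen_to_enroll = 0.40   # fraction of screened patients who enroll

# Inflate enrollment for expected dropout, then screening for enrollment yield.
n_enroll = math.ceil(n_required / (1 - dropout_rate))
n_screen = math.ceil(n_enroll / screen_to_enroll)
print(n_enroll, n_screen)  # -> 158 395
```

A pilot that tells you the screen-to-enroll yield is 0.40 rather than 0.80 roughly doubles the screening workload and may change how many sites you need, which is far more consequential for planning than any p-value from 22 participants.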
Unfortunately, although most academic physicians should understand that those are the chief goals of a "pilot" study, they often ALSO want to publish their findings as a mini-trial, to show that they are working in this area and presumably win some prestige or claim "first!" in the area. Since they are so conditioned to seeing p-values and "outcomes" in published Phase 2/3 trials, they often assume (because it's what they're used to) that it is necessary to provide the would-be "primary outcomes" in each group, and significance tests, even for their pilot study.
Humorously, I am often asked to provide power calculations for pilot studies; the MD will tell me "I'll be able to enroll 25 patients over 1 year. Can you give me a power calculation that shows we'll be able to detect an effect?" This comes back to the point above: it's a basic misunderstanding of the purpose of a "pilot" trial.