09 September 2016

Suppose you have conducted an experiment. You find a marginally significant interaction effect (at the 10% level) but, regretting not having performed a pre-experimental power analysis, you suspect that the power of the statistical test for the interaction was too low.

You decide to collect additional observations for some of the experimental groups, in order to increase the sample size in those groups and, presumably, the power of the respective statistical tests.

Regardless of how this changes the test statistics, is this a reasonable approach? I somehow feel that it is not, but I cannot quite recall the argument against it.
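To make my worry concrete, here is a rough simulation sketch of what I suspect goes wrong, using a two-sample t-test as a simple stand-in for the interaction test (the sample sizes, seed, and the 5%/10% thresholds are my own assumptions, not part of the original design). It simulates a true null hypothesis and a procedure that collects extra data only after a marginal first result:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)

ALPHA = 0.05          # conventional significance level
MARGINAL = 0.10       # "marginally significant" band: ALPHA <= p < MARGINAL
N1, N_EXTRA = 30, 30  # initial and added observations per group (assumed)
N_SIMS = 20_000

false_positives = 0
for _ in range(N_SIMS):
    # The null hypothesis is true: both groups come from the same distribution.
    a = rng.normal(size=N1)
    b = rng.normal(size=N1)
    p1 = stats.ttest_ind(a, b).pvalue
    if p1 < ALPHA:
        false_positives += 1           # significant right away: stop and report
    elif p1 < MARGINAL:
        # Marginal result: add observations and retest on the pooled data.
        a = np.concatenate([a, rng.normal(size=N_EXTRA)])
        b = np.concatenate([b, rng.normal(size=N_EXTRA)])
        if stats.ttest_ind(a, b).pvalue < ALPHA:
            false_positives += 1       # "rescued" by the extra data

print(f"Realised Type I error: {false_positives / N_SIMS:.3f} "
      f"(nominal level: {ALPHA})")
```

If I am reasoning correctly, the realised rejection rate should come out above the nominal level, because the procedure effectively gives the null hypothesis two chances to be rejected, with the second chance triggered by the first result. But I would appreciate confirmation that this is indeed the argument.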

Additionally, if one had proceeded as described above: would it be better to analyse the data and present the findings from both experiments (1) together, (2) separately, or (3) to exclude the data from the second, "additional" experiment entirely?

Thanks in advance for replies.
