Hi,

I am planning a follow-up experiment to a previous study and have a question about estimating sample size:

For simplicity: the previous study showed that Condition 1 had more accurate responses (64%) than Condition 2 (41%), and a paired-samples t-test was significant (t(12) = 3.43, p = 0.005, d = 0.9, two-tailed). This finding has recently been challenged because the stimuli were somewhat confounded. My new experiment investigates whether a significant effect between Condition 1 and Condition 2 will still be observed when new, more appropriate stimuli (with the confound removed) are used.

The problem is that with the new stimuli the difference between conditions will *probably* be much smaller, because we are supposedly removing a confound (e.g., a ~10% rather than a 23% difference), although we still hypothesise it will be significant. To calculate the sample size needed to detect an effect with the new stimuli, I need to estimate Cohen's d, but it is almost impossible to estimate this precisely because I don't know how much the confound contributed to the original effect. So, which Cohen's d should I use for my power calculation? I see three options:

(i) Simply predict what Cohen's d may be by scaling down the original effect (e.g., from 0.9 to 0.4).

(ii) Specify a minimal Cohen's d that would still satisfy the psychological reality of the effect (e.g., based on d = 0.3, power = 0.8, alpha = 0.05, the estimate is 89 participants; see the sketch after this list). But what if, after collecting the data, my d is slightly below this yet still meaningful (e.g., d = 0.25)?

(iii) Collect preliminary data (I've sometimes seen this suggested, but it seems counter-intuitive to the whole idea of pre-planning).
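For context, here is a minimal sketch of the calculation behind option (ii), assuming the paired design is powered like a one-sample t-test on the difference scores and using statsmodels' TTestPower; the range of candidate d values is purely illustrative:

```python
# Required number of pairs for a paired-samples t-test at several
# assumed effect sizes (paired design = one-sample test on differences).
from math import ceil

from statsmodels.stats.power import TTestPower

analysis = TTestPower()

# Illustrative effect sizes, from the original d = 0.9 down to the
# "still meaningful" d = 0.25 mentioned above.
for d in (0.9, 0.5, 0.4, 0.3, 0.25):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative='two-sided')
    print(f"d = {d:.2f}: n = {ceil(n)} participants")
```

For d = 0.3 this gives roughly the 89-90 participants quoted above, and the loop makes clear how sharply the required n grows as the assumed d shrinks, which is the crux of choosing between options (i) and (ii).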

Many thanks for any insights!

Ryan
