Hello,
I am currently undertaking a meta-analysis of studies that used pharmacological agents to ameliorate associative fear memories in rodents. Effect sizes are fundamental to meta-analysis, but I am not very familiar with them, so I have a (perhaps) elementary question about how they are calculated:
I have decided that Hedges' g will be the most appropriate effect-size metric for my analysis, given the relatively small sample sizes of the included studies. Authors often report the observation of interest (i.e. a treatment vs. control effect) as a (two-sample) t-test, and I have found that the reported values of a t-test can easily be converted into a Hedges' g effect size - all good so far.
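For concreteness, here is the conversion I am using, as a minimal sketch (the function name and example numbers are my own; the formulas are the standard Cohen's d conversion plus Hedges' small-sample correction):

```python
import math

def hedges_g_from_t(t, n1, n2):
    """Convert a two-sample t statistic to Hedges' g.

    d = t * sqrt(1/n1 + 1/n2) gives Cohen's d, and J is the
    approximate small-sample correction factor, J = 1 - 3/(4*df - 1).
    """
    d = t * math.sqrt(1.0 / n1 + 1.0 / n2)   # Cohen's d from t
    df = n1 + n2 - 2                          # pooled degrees of freedom
    j = 1.0 - 3.0 / (4.0 * df - 1.0)          # Hedges' correction J
    return d * j

# Illustrative values: t = 2.5 with n = 8 per group
g = hedges_g_from_t(2.5, 8, 8)                # g ≈ 1.18
```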
However, the observation of interest is often reported as an ANOVA (e.g. when the treatment group is compared with more than one control type), and other meta-analyses I have read seem to derive their effect sizes from either t-test or ANOVA statistics, depending on which was carried out. Again, I think I have found the correct equation to derive a Hedges' g effect size from the reported F value of an ANOVA. However, I am struggling to understand how effect sizes can be comparable when they are derived from both t-test and ANOVA statistics. Surely ANOVA-derived effect sizes are less informative, because you cannot tease out the individual contrast of interest from an omnibus F value, as you obviously can from a two-sample t-test.
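In case it helps clarify my question, the F-to-g equation I found only seems valid when the F statistic has 1 numerator degree of freedom (i.e. it already describes a two-group comparison), in which case t = sqrt(F). A sketch of my understanding (the function name and sign handling are my own):

```python
import math

def hedges_g_from_f(f, n1, n2, sign=1):
    """Convert an F statistic with 1 numerator df (a two-group
    comparison) to Hedges' g.

    For df1 = 1, t = sqrt(F); the sign of the effect must be
    recovered from the reported group means, since F discards it.
    An omnibus F over 3+ groups cannot be converted this way.
    """
    t = sign * math.sqrt(f)                   # recover |t|, restore sign
    d = t * math.sqrt(1.0 / n1 + 1.0 / n2)    # Cohen's d from t
    df = n1 + n2 - 2
    j = 1.0 - 3.0 / (4.0 * df - 1.0)          # Hedges' correction J
    return d * j

# Illustrative values: F = 6.25 (so t = 2.5) with n = 8 per group
g = hedges_g_from_f(6.25, 8, 8)               # same g ≈ 1.18 as the t-based route
```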
As a result, my initial instinct would be to request the raw data from the authors (when ANOVAs are reported), run a t-test for the specific contrast of interest (i.e. treatment vs. one particular control group), and calculate Hedges' g from the resulting t value.
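With raw data in hand, the t-test detour would not even be necessary, since g can be computed directly from the two groups' scores via the pooled standard deviation. A sketch, assuming made-up scores purely for illustration:

```python
import math
import statistics

def hedges_g_from_raw(x, y):
    """Hedges' g computed directly from two groups' raw scores,
    using the pooled standard deviation and the correction J."""
    n1, n2 = len(x), len(y)
    v1, v2 = statistics.variance(x), statistics.variance(y)     # sample variances
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (statistics.mean(x) - statistics.mean(y)) / sp          # Cohen's d
    j = 1.0 - 3.0 / (4.0 * (n1 + n2 - 2) - 1.0)                 # Hedges' correction
    return d * j

# Hypothetical freezing scores: treatment vs. one specific control group
treatment = [1, 2, 3, 4]
control = [2, 3, 4, 5]
g = hedges_g_from_raw(treatment, control)
```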
Apologies for the long question, but I would hugely appreciate someone enlightening me and showing me how t-test- and ANOVA-derived effect sizes can be reconciled in a meta-analysis.
Many thanks!
Alex Nagle