Is there a simple way to compute the fail-safe N for a meta-analysis when the only available data are effect sizes (d), their standard errors, and the sample size of every study?
I am not sure I understand what you are asking for. Are you concerned that your pooled effect size is not statistically significant, and you want to know what sample size would be needed for this effect to reach statistical significance? You may want to read the following article:
Sutton AJ, Cooper NJ, Jones DR, Lambert PC, Thompson JR, Abrams KR. Evidence-based sample size calculations based upon meta-analysis. Statistics in Medicine 2007; 26:2479-2500.
There is also a software program for Stata described in the paper (linked below).
I am looking for a simple way to estimate how many studies with non-significant effects would be needed to make the overall effect non-significant. It's a way of assessing publication bias.
I would also be interested in answers to this topic. I once did a meta-analysis and calculated the fail-safe N using Rosenthal's method. This calculation is relatively straightforward.
Rosenthal, R. (1979). The "File Drawer Problem" and tolerance for null results. Psychological Bulletin, 86(3), 638-641.
My problem is that, following this calculation, I sometimes get a fail-safe number of zero, which surely cannot be correct when I have a significant effect size of, say, d = 0.2.
I also have the same data as you, Mykolas, since that is what the fail-safe N requires.
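Since Rosenthal's method only needs the per-study z-scores (which you can get as d divided by its standard error), here is a minimal sketch of the calculation in Python. The function name and the example numbers are illustrative, not from any of the papers above:

```python
import math
from statistics import NormalDist

def rosenthal_failsafe_n(d, se, alpha=0.05):
    """Rosenthal's (1979) fail-safe N.

    Combines per-study z-scores with Stouffer's method and returns the
    number of unpublished null-result studies that would be needed to
    push the combined z below the one-tailed critical value.

    d  : list of study effect sizes (e.g. Cohen's d)
    se : list of matching standard errors
    """
    z = [di / si for di, si in zip(d, se)]        # per-study z-scores
    z_alpha = NormalDist().inv_cdf(1 - alpha)     # one-tailed critical z
    k = len(z)
    n_fs = (sum(z) ** 2) / z_alpha ** 2 - k       # Rosenthal's formula
    return max(0, math.ceil(n_fs))

print(rosenthal_failsafe_n([0.2, 0.3, 0.25], [0.08, 0.1, 0.09]))
```

Note that the formula can come out negative (reported here as zero) whenever the combined z is itself below the critical value, which may explain the zero you are seeing even with a nominally significant pooled d.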
Not sure if this is of any use to you, but my program MAVIS (kylehamilton.net/shiny/MAVIS/) will compute the fail-safe N using the three methods described here:
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 638-641.
Orwin, R. G. (1983). A fail-safe N for effect size in meta-analysis. Journal of Educational Statistics, 8, 157-159.
Rosenberg, M. S. (2005). The file-drawer problem revisited: A general weighted method for calculating fail-safe numbers in meta-analysis. Evolution, 59, 464-468.
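For comparison, Orwin's (1983) variant works directly on the effect sizes rather than z-scores: it asks how many null (d = 0) studies would drag the unweighted mean effect down to some minimal effect size you still consider meaningful. A minimal sketch, with an illustrative criterion of 0.1:

```python
import math

def orwin_failsafe_n(d, d_criterion=0.1):
    """Orwin's (1983) fail-safe N.

    Number of null-result (d = 0) studies needed to reduce the
    unweighted mean effect size to d_criterion, the smallest effect
    still judged meaningful (0.1 here is just an example choice).
    """
    k = len(d)
    d_bar = sum(d) / k                            # unweighted mean effect
    n_fs = k * (d_bar - d_criterion) / d_criterion
    return max(0, math.ceil(n_fs))

print(orwin_failsafe_n([0.2, 0.3, 0.25], d_criterion=0.1))
```

Unlike Rosenthal's version, this one does not depend on significance levels at all, only on where you set the criterion effect size.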
It looks like you have all the information required for publication bias analysis with the statistics you name. We developed some workbooks for Microsoft Excel that also provide fail-safe Ns (along with several other methods for assessing publication bias). Check http://www.meta-essentials.org. They use the same methods as William describes above, and:
Gleser, L. J., & Olkin, I. (1994). Stochastically dependent effect sizes. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 339–356). New York: Russell Sage Foundation.