I know at least two tests: Egger et al. (1997) [1] and Begg & Mazumdar (1994) [2]. Both test the asymmetry of a funnel plot, but with different statistical approaches. Egger's test is based on a regression model (so, technically, all the regression assumptions are required); Begg & Mazumdar's test is based on Kendall's non-parametric correlation (so it does not need normality assumptions). Nevertheless, both tests assume that the studies all come from the same population, that is, that there is no heterogeneity between studies. So, if the heterogeneity test is rejected, technically you should not use either of these tests. There are other tests (like Harbord's test [3] for binary data), but I cannot say much about them.
If you use R, I highly recommend the book by Schwarzer et al. (2015) [4], Meta-Analysis with R. This book, which uses the package "meta", is a very nice and practical book on meta-analysis, and Chapter 5 (Section 5.2, "Statistical Tests for Small-Study Effects", pp. 115-124) discusses all the statistical tests for publication bias. If your institution has a subscription to SpringerLink, you can download the book for free.
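For illustration, here is a minimal sketch with the "meta" package. The data frame and its columns (TE, seTE) are hypothetical placeholders for your own effect sizes and standard errors, and the method.bias names follow the book's version of the package (newer releases also accept "Egger" and "Begg"):

    # install.packages("meta")  # if not already installed
    library(meta)

    # Hypothetical data: one effect size (TE) and standard error (seTE) per
    # study; note that metabias() requires at least 10 studies by default
    dat <- data.frame(
      TE   = c(0.52, 0.41, 0.73, 0.12, 0.35, 0.68, 0.21, 0.44, 0.59, 0.30),
      seTE = c(0.22, 0.18, 0.30, 0.10, 0.15, 0.28, 0.12, 0.19, 0.25, 0.14)
    )

    # Generic inverse-variance meta-analysis
    m <- metagen(TE = TE, seTE = seTE, data = dat, sm = "SMD")

    # Egger's regression test (relies on the regression assumptions)
    metabias(m, method.bias = "linreg")

    # Begg & Mazumdar's rank correlation test (non-parametric)
    metabias(m, method.bias = "rank")

For binary outcomes analysed with metabin(), Harbord's test is available the same way via method.bias = "score".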
References:
[1] Egger, M., Smith, G. D., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. BMJ, 315(7109), 629-634. doi:10.1136/bmj.315.7109.629
[2] Begg, C. B., & Mazumdar, M. (1994). Operating characteristics of a rank correlation test for publication bias. Biometrics, 50(4), 1088-1101. doi:10.2307/2533446
[3] Harbord, R. M., Egger, M., & Sterne, J. A. (2006). A modified test for small-study effects in meta-analyses of controlled trials with binary endpoints. Statistics in Medicine, 25(20), 3443-3457. doi:10.1002/sim.2380
[4] Schwarzer, G., Carpenter, J. R., & Rücker, G. (2015). Meta-Analysis with R. New York, NY: Springer. doi:10.1007/978-3-319-21416-0
Great answer by Julio, with good practical resources.
I want to suggest that you look at the issue more broadly. Publication bias is almost always likely: however you search for articles and papers, you will always miss some grey literature or never-finished file-drawer material, which probably has smaller effects. The tests that Julio mentions build on this assumption and look for anomalies in the distribution of effect sizes, in particular an overrepresentation of large effect sizes with small sample sizes/large standard errors.
However, you should also look at your actual search strategy. Given your criteria, how likely is it that you uncovered most, if not all, of the grey literature? If you limited your search to published material in English-language (high-impact) journals, it is very likely that you missed some. You can also check for significant differences between unpublished and published articles.
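If your search did turn up unpublished material, one way to sketch that check in R is a subgroup analysis with the "meta" package (the data and the published indicator below are hypothetical; note the subgroup argument was called byvar before version 5 of the package):

    library(meta)

    # Hypothetical data: published vs. unpublished (grey-literature) effects
    dat <- data.frame(
      TE        = c(0.50, 0.45, 0.60, 0.55, 0.15, 0.10, 0.20),
      seTE      = c(0.20, 0.18, 0.22, 0.21, 0.15, 0.14, 0.16),
      published = c("yes", "yes", "yes", "yes", "no", "no", "no")
    )

    # Subgroup analysis by publication status; the printout includes a
    # test for subgroup differences (published vs. unpublished)
    m <- metagen(TE = TE, seTE = seTE, data = dat, sm = "SMD",
                 subgroup = published)
    summary(m)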
Both tests that Julio mentions are included in Meta-Essentials, a simple tool for meta-analysis; see www.meta-essentials.com or http://onlinelibrary.wiley.com/doi/10.1002/jrsm.1260/full
I didn't know Meta-Essentials. A great tool for my colleagues who aren't familiar with R! Thanks so much.
And Faisal, Robert mentions a very good point. Much of how to deal with publication bias is methodological: your search strategy and how you dealt with unpublished results (grey literature, e.g., theses, conference papers, etc.).
No need to repeat the same suggestions, but I also recommend using Egger's test and the funnel plot to identify asymmetries or outliers. I feel that publication bias is most likely present; these tests only assess the likelihood of potential bias, they do not statistically control for it.
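With the "meta" package, the funnel plot is one line on a fitted meta-analysis object; a minimal sketch with hypothetical data, as in Julio's example above:

    library(meta)

    # Hypothetical effect sizes and standard errors
    dat <- data.frame(
      TE   = c(0.52, 0.41, 0.73, 0.12, 0.35, 0.68, 0.21, 0.44, 0.59, 0.30),
      seTE = c(0.22, 0.18, 0.30, 0.10, 0.15, 0.28, 0.12, 0.19, 0.25, 0.14)
    )
    m <- metagen(TE = TE, seTE = seTE, data = dat, sm = "SMD")

    # Funnel plot: asymmetry, or isolated small studies with large effects
    # in the bottom corners, suggests possible publication bias
    funnel(m)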
A couple of other suggestions for your analysis: the Fail-Safe N is a way to estimate the number of null effects that would be needed to overturn a significant overall mean effect. Typically, if your estimated Fail-Safe N exceeds 5k + 10 (where k is the number of effects in your analysis), your results are considered robust, and you can feel confident that they would not be overturned by a few missing or unpublished studies. The references are below, followed by a short sketch in R.
Rosenberg, M. S. (2005). The file-drawer problem revisited: A general weighted method for calculating fail-safe numbers in meta-analysis. Evolution, 59(2), 464-468.
Rosenthal, R. (1979). The "file drawer problem" and tolerance for null results. Psychological Bulletin, 86(3), 638-641.
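As an illustration, the "metafor" package implements fail-safe N calculations; the effect sizes (yi) and sampling variances (vi) below are hypothetical:

    # install.packages("metafor")  # if not already installed
    library(metafor)

    # Hypothetical effect sizes (yi) and sampling variances (vi)
    yi <- c(0.52, 0.41, 0.73, 0.12, 0.35, 0.68, 0.21, 0.44)
    vi <- c(0.050, 0.032, 0.090, 0.010, 0.022, 0.078, 0.014, 0.036)

    # Rosenthal's fail-safe N; type = "Rosenberg" gives the weighted variant
    res <- fsn(yi = yi, vi = vi, type = "Rosenthal")
    res

    # Rosenthal's rule of thumb: robust if Fail-Safe N > 5k + 10
    k <- length(yi)
    res$fsnum > 5 * k + 10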
Finally, another method that we have used to potentially identify publication bias is to categorize all of the effects by primary aim, comparing effects gathered from studies in which your outcome of interest was the primary aim of the original manuscript with effects from studies in which it was not. We might suppose that effects from studies in which your outcome of interest was not a primary aim are more likely to be null.
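One way to sketch this comparison in R is a meta-regression with the "meta" package, using a hypothetical primary_aim indicator coded for each effect (metareg() also requires the "metafor" package to be installed):

    library(meta)

    # Hypothetical data: effect sizes with an indicator of whether the
    # outcome of interest was the primary aim of the original study
    dat <- data.frame(
      TE          = c(0.55, 0.48, 0.62, 0.10, 0.15, 0.08, 0.40, 0.12),
      seTE        = c(0.20, 0.18, 0.25, 0.12, 0.14, 0.11, 0.19, 0.13),
      primary_aim = c("yes", "yes", "yes", "no", "no", "no", "yes", "no")
    )

    m <- metagen(TE = TE, seTE = seTE, data = dat, sm = "SMD")

    # Meta-regression with primary aim as a moderator: a significant
    # coefficient suggests the two groups of effects differ systematically
    metareg(m, ~ primary_aim)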
Thanks for all the sharing. I'm not sure, though, whether publication bias is as prevalent when we use aggregate data (AD), or whether individual participant data (IPD) can help reduce the probability. Just curious, as I am only now learning about meta-analysis.