Very nice answer from Roger, especially about meta-regression (which you may also see called "mixed-effects models" in the meta-analysis literature). To add to Roger's response, you can think of the fixed-effect model as providing statistical inference only about the studies included in your analysis; you can use it to ask what the (weighted) average effect is among those studies. You can use the random-effects model to test similar things (the average effect), but it now provides inference about the larger population of studies from which your studies are drawn. The random-effects model also estimates the between-study variance (tau squared) and tests for significant heterogeneity among the effects. Significant heterogeneity (from the Q statistic and its p-value) is often cause for running a meta-regression to explain that heterogeneity.
Viechtbauer (2010) has a very nice explanation and step-by-step walkthrough of these analyses tailored for R, but you may find the information useful regardless of your statistical software: Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1-48.
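Since that walkthrough uses the metafor package, here is a minimal sketch of what fitting both models looks like in R; the BCG vaccine dataset (dat.bcg) ships with metafor, and everything else is the package's standard workflow:

    # install.packages("metafor")  # if not already installed
    library(metafor)

    # Log risk ratios and sampling variances from the 13 BCG vaccine trials
    dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
                  ci = cpos, di = cneg, data = dat.bcg)

    # Fixed-effect model: inference restricted to these studies
    fe <- rma(yi, vi, data = dat, method = "FE")

    # Random-effects model (REML): inference about the population of studies
    re <- rma(yi, vi, data = dat, method = "REML")

    summary(fe)
    summary(re)  # output includes tau^2 and the Q test for heterogeneity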
There is no answer to your question without looking at your data. First, if you can conceptually argue that the included studies are a random sample of all possible studies and that they are measuring a "random" effect, then you can use a random-effects model. If they are measuring an effect that is subject only to sampling variation and not to "effect variation", then you can use a fixed-effect model.
It is important to note that, in the presence of heterogeneity (especially for fixed effects), you should not simply switch to random effects, because that does not account for the heterogeneity itself. You need to explain the heterogeneity (effect modification) using meaningful variables that could alter the effect you are measuring. To do this you can use meta-regression, as sketched below.
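Continuing the metafor sketch above, a minimal meta-regression uses the mods argument; absolute latitude (ablat) is the moderator from Viechtbauer's own BCG example:

    library(metafor)
    dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
                  ci = cpos, di = cneg, data = dat.bcg)

    # Mixed-effects meta-regression: does absolute latitude explain
    # the between-study heterogeneity?
    mr <- rma(yi, vi, mods = ~ ablat, data = dat)
    summary(mr)  # QM tests the moderator; QE tests residual heterogeneity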
The basic assumption of meta-analysis is that the included studies have random variation and that the results are intended to be generalized to the wider population; otherwise, why are you doing a meta-analysis in the first place? The argument for one true effect size in a fixed-effect model is almost inconceivable in biological systems, so in epidemiology we would almost always use a random-effects model for meta-analysis, for many reasons (with the exception of some occupational health and community exposure assessments). However, in other fields I can imagine a fixed-effect model. So if you think of the studies themselves as a random sample from the universe of potential studies, then you should use a random-effects model. If you are estimating the same effect size across studies, then the fixed-effect model should be used.
If there is no heterogeneity between the results of the studies included in the meta-analysis, you can use a fixed-effect model. If there is heterogeneity, you should use a random-effects model.
Roger is right: this question must be answered at the level of the research question, not statistically.
If you have an effect that should be universal, for instance the effect of aspirin on platelets, then a fixed-effects model might be appropriate. However, in real life effects vary locally, and a random-effects model is more in keeping with what we know. If you are looking at the effect of a treatment, the treatment context (prevalence of the condition, diagnostic pathways, healthcare resources, even the general health of the population) will make a difference to the observed effect.
For this reason, I would not do a statistical test to decide on fixed or random effects. Such tests have low power to detect heterogeneity if the number of studies is small. Better to use prior knowledge to guide the process.
This is not to decry measuring heterogeneity. It has an important role in alerting us to possible systematic differences between studies.
To add to these great explanations, I think it is also important to interpret the I2 statistic when examining between-study heterogeneity in a random-effects model; it tells you how much of the between-study variability is true heterogeneity rather than sampling error. The larger I2 is, the more potential you have to explain the variability in your sample of studies (e.g., via a priori moderator analyses/meta-regression). It is important to use the information from multiple heterogeneity statistics (tau2, I2, and Q) to guide your analyses and interpretation.
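In metafor, all three statistics are available from a fitted random-effects model; a quick sketch, reusing the BCG example from above:

    library(metafor)
    dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
                  ci = cpos, di = cneg, data = dat.bcg)
    res <- rma(yi, vi, data = dat)

    res$tau2         # estimated between-study variance (tau^2)
    res$I2           # I^2: share of total variability due to heterogeneity
    res$QE; res$QEp  # Q statistic and its p-value
    confint(res)     # confidence intervals for tau^2, I^2, and H^2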
When the choice in meta-analysis is between fixed- and random-effects models, then the fixed-effect model is most certainly the only appropriate one to use. Yes, it suffers from overdispersion (its computed standard error is too small), as indicated above, but that can easily be fixed if you use the IVhet approach available through the metaXL software at www.epigear.com.
Of course, Cochrane and other organizations may provide you with conflicting statements and guidelines for choosing a model (including the heterogeneity criteria mentioned above). These are all incorrect, for the simple reason that the random-effects estimator will ALWAYS have a larger variance and MSE than the fixed-effect estimator, REGARDLESS of heterogeneity, and therefore serves no purpose. The original aim in creating the conventional random-effects model was to solve overdispersion; not only has it not solved this, it has produced an estimator that performs very poorly compared with the fixed-effect estimator.
There might be a lot of disagreement on this issue, but I have yet to hear any logical reason why we should use the random-effects estimator (short of overdispersion, which has been solved with the IVhet model).
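For readers without metaXL, here is a hand-rolled sketch of the IVhet idea as I read Doi et al. (2015): keep the inverse-variance (fixed-effect) point estimate, but compute its variance with each study's sampling variance inflated by the DerSimonian-Laird tau2, which corrects the overdispersion. Treat this as an illustration of the principle rather than a validated implementation; metaXL is the reference software.

    # yi: study effect estimates; vi: their sampling variances
    ivhet <- function(yi, vi) {
      w   <- 1 / vi          # inverse-variance (fixed-effect) weights
      wt  <- w / sum(w)
      est <- sum(wt * yi)    # same point estimate as the fixed-effect model

      # DerSimonian-Laird estimate of tau^2
      q    <- sum(w * (yi - est)^2)
      tau2 <- max(0, (q - (length(yi) - 1)) / (sum(w) - sum(w^2) / sum(w)))

      # Overdispersion-corrected variance: fixed-effect weights, but each
      # study's variance inflated by tau^2
      v <- sum(wt^2 * (vi + tau2))
      c(estimate = est, se = sqrt(v))
    }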
Here is a nice overview of the differences between fixed and random effects meta-analyses, with many references at the end which might be helpful: http://www.bmj.com/content/342/bmj.d549
Thank you all. However, univariate and multivariate meta-regressions have different purposes. When is a univariate meta-regression sufficient? For a multivariate meta-regression, a rule of thumb is 10 studies per explanatory variable, and quite often there is a large amount of missing data in some of the variables to be entered. Would, for example, undertaking a multivariate meta-regression with 20 studies to model up to ten variables be acceptable?
PS: and regarding your last question, I'd say no. Think of it as a linear regression with 20 observations and 10 independent variables: almost impossible to run, especially if you have binary predictors (which are more likely in meta-regression). We recently had a meta-regression with 70-80 studies and still struggled to analyse more than 5-10 predictors.
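To make the degrees-of-freedom point concrete, here is what a multivariable meta-regression looks like in metafor; the moderator names are hypothetical stand-ins for whatever study-level variables you have coded:

    library(metafor)
    # dat is assumed to hold yi, vi and the (hypothetical) moderators
    res <- rma(yi, vi, mods = ~ year + dose + blinded, data = dat)
    res$k - res$p  # residual degrees of freedom: with k = 20 studies and
                   # 10 moderators (p = 11 with the intercept), only 9 remain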
Dr Kumar, thanks for your response. A weighting sensitivity plot is used to assess to what extent the effect model (fixed or random) may influence the results. This is meta-analysis dependent and tells us nothing about which estimator has the smaller variance and MSE, which is what is actually of interest. A simple simulation will demonstrate that the fixed-effect estimator always has a smaller MSE and therefore performs better, and that the theories above about choice of estimator are indeed wrong. Yes, fixed-effect estimators are biased, but since we only do a meta-analysis once, the lower-MSE estimator will be closest to reality irrespective of bias. The only situation where the MSEs of the random- and fixed-effect estimators converge (because of bias in the latter) is when there are hundreds of studies in the meta-analysis - a really unrealistic situation. Keep in mind that if you do decide to use the fixed-effect estimator for heterogeneous studies, only the IVhet approach is valid, as it has a variance corrected for overdispersion.
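A minimal sketch of such a simulation, comparing the MSE of the fixed-effect and DerSimonian-Laird random-effects estimators under one arbitrary heterogeneous data-generating process (all parameter values here are illustrative; vary tau2 and k to explore the claim):

    set.seed(42)
    mu <- 0.5; tau2 <- 0.1; k <- 10; nsim <- 5000
    fe_err <- re_err <- numeric(nsim)

    for (s in 1:nsim) {
      vi    <- runif(k, 0.02, 0.3)        # sampling variances
      theta <- rnorm(k, mu, sqrt(tau2))   # true study effects
      yi    <- rnorm(k, theta, sqrt(vi))  # observed effects

      w  <- 1 / vi
      fe <- sum(w * yi) / sum(w)          # fixed-effect estimate

      q  <- sum(w * (yi - fe)^2)          # DerSimonian-Laird tau^2
      t2 <- max(0, (q - (k - 1)) / (sum(w) - sum(w^2) / sum(w)))
      wr <- 1 / (vi + t2)
      re <- sum(wr * yi) / sum(wr)        # random-effects estimate

      fe_err[s] <- (fe - mu)^2
      re_err[s] <- (re - mu)^2
    }

    c(MSE_fixed = mean(fe_err), MSE_random = mean(re_err))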