What are the alternative quantitative synthesis methods for combining studies if we can't apply meta-analysis because of heterogeneity or the shape of the data in the set of studies we have?
I'll third the answer. An appropriate meta-analysis uses well-defined, a priori systematic review strategies to identify studies to include, based on a well-founded objective. Ensure you're combining studies that fit your meta-analytic objective based on your systematic review. Sometimes you end up with heterogeneous studies that fit your inclusion criteria: it's an interesting problem to have. Don't throw away a study because it's problematic (i.e., too different from the rest). The Cochrane Collaboration has some helpful tips on good systematic review / meta-analysis on their website.
An alternative is to do the plots and tables suggested in the Cochrane guidelines and elsewhere, but not to pretend that the studies are representative of the population you are interested in (or representative after weighting), and not to press those extra buttons and produce population estimates that are unrealistic.
It varies by discipline: in some areas the studies are different enough that this more descriptive approach is more honest, while in other areas the studies may be similar enough to pool. It will depend on the field you are doing the meta-analysis in.
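For illustration, here is a minimal sketch of such a descriptive summary: a forest-style plot showing each study's estimate and 95% confidence interval, with no pooled summary added. The study names, estimates, and intervals below are hypothetical placeholders.

```python
# Descriptive forest-style plot: show each study's estimate and 95% CI
# without computing a pooled summary. All data below are hypothetical.
import matplotlib.pyplot as plt

studies = ["Study A", "Study B", "Study C", "Study D"]
effect  = [0.42, 0.10, 0.75, 0.33]    # per-study effect estimates
ci_low  = [0.20, -0.15, 0.50, 0.05]   # lower 95% CI bounds
ci_high = [0.64, 0.35, 1.00, 0.61]    # upper 95% CI bounds

fig, ax = plt.subplots()
y = range(len(studies))
ax.errorbar(effect, y,
            xerr=[[e - lo for e, lo in zip(effect, ci_low)],
                  [hi - e for e, hi in zip(effect, ci_high)]],
            fmt="o", capsize=3)
ax.axvline(0, linestyle="--", linewidth=1)   # reference line at "no effect"
ax.set_yticks(list(y))
ax.set_yticklabels(studies)
ax.set_xlabel("Effect estimate (95% CI)")
ax.set_title("Study-level estimates, no pooled summary")
plt.tight_layout()
plt.show()
```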
I'm with Daniel on this one. If the studies are a heterogeneous mess (and this is usually clear when you compare the settings, participants, and methods), then there is no point in concluding anything other than that it's time to do some proper research.
On the other hand, if the participants, methods and settings are reasonably consistent, then you would be justified in treating the effect as random and analysing it as such.
Everyone is right! My take on it is that random effects take the heterogeneity into account by basically saying that there isn't one right answer; the pooled estimate is just a weighted average of the right answers.
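To make that concrete, here is a minimal numpy sketch of the standard DerSimonian-Laird random-effects calculation, where the pooled value is an inverse-variance weighted average with the between-study variance (tau-squared) added to each study's own variance. The effect estimates and variances below are hypothetical.

```python
# DerSimonian-Laird random-effects pooling: the pooled value is a weighted
# average of study effects, with between-study variance tau^2 added to each
# study's within-study variance. Numbers are hypothetical.
import numpy as np

y = np.array([0.42, 0.10, 0.75, 0.33])       # per-study effect estimates
v = np.array([0.012, 0.016, 0.017, 0.020])   # per-study (within-study) variances

# Fixed-effect weights and Q statistic (a measure of heterogeneity)
w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)

# Method-of-moments estimate of the between-study variance tau^2
k = len(y)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects weights and pooled estimate
w_star = 1.0 / (v + tau2)
y_pooled = np.sum(w_star * y) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))

print(f"tau^2 = {tau2:.4f}")
print(f"pooled effect = {y_pooled:.3f} (SE {se_pooled:.3f})")
```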
You certainly want to explore all that heterogeneity, and meta-regression is one way of doing that. This can be really informative and is like saying there's one answer for one subgroup and a different answer for the other subgroup, not one right answer.
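As a rough illustration, a basic meta-regression can be sketched as an inverse-variance weighted least-squares regression of the study effects on a study-level moderator (here a hypothetical subgroup indicator). This simple version ignores residual between-study variance, which dedicated meta-analysis packages handle properly.

```python
# Meta-regression sketch: regress study effects on a study-level moderator
# (a hypothetical subgroup indicator), weighting each study by the inverse
# of its variance. Data are hypothetical.
import numpy as np
import statsmodels.api as sm

y = np.array([0.42, 0.10, 0.75, 0.33, 0.58, 0.15])          # study effects
v = np.array([0.012, 0.016, 0.017, 0.020, 0.014, 0.018])    # study variances
subgroup = np.array([1, 0, 1, 0, 1, 0])                     # 1 = subgroup A, 0 = subgroup B

X = sm.add_constant(subgroup)           # intercept + moderator
model = sm.WLS(y, X, weights=1.0 / v)   # inverse-variance weights
result = model.fit()
print(result.summary())                 # slope = difference between subgroup answers
```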
But sometimes there's just too much heterogeneity and it can't be explained away, and you're just left with your graphical summary of the individual studies. But there's no way of doing meta-analysis without doing meta-analysis, if that's what you were looking for.
I think that there is a difference between random effects, where the same factor or treatment produces varying effects in different populations or settings, and design heterogeneity, in which the study methodologies are incompatible with one another.
In the latter case, we are breaking the fundamental premise of meta-analysis, which is combining studies of the same factor/treatment in the same population measured against the same endpoint.
So, how about the IPD (Individual Patient Data) meta-analysis method or the single-case meta-analysis method for this kind of column data from studies? I only have column data from each study, and these columns contain household expenditure elasticity values. If meta-analysis or other methods are not appropriate for this data, can I use a systematic review to combine these studies? Or which other quantitative methods are appropriate for combining the studies with this data?