If case-control studies are to be included in a meta-analysis of a cancer that has a low incidence in the population, how many studies must be included in that meta-analysis?
The more studies you have, the more robust your synthesized effect will be. Nevertheless, I would assume that the statistical power of the original studies depends largely on at least three factors: 1) the ratio of cases to controls, 2) the strength of the association between the predictor(s) and case status, and 3) the base rate of the predictor in the total population of cases and controls. And the lower the statistical power of the original studies, the more studies you need in order to get a robust finding. Still, combining data from two studies is better than having one, and thirty is better than two. I would, however, think it uncommon to meta-analyse fewer than, say, five studies. But this is simply a matter of opinion.
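Just to illustrate the three factors above, here is a rough sketch in Python (my own illustration with made-up numbers, using a simple normal approximation for comparing two proportions) of how the case-to-control ratio, the assumed odds ratio, and the exposure base rate feed into the power of a single case-control study:

from scipy.stats import norm

def case_control_power(n_cases, control_ratio, exposure_prev_controls,
                       odds_ratio, alpha=0.05):
    """Approximate power to detect a given odds ratio in one case-control study.

    n_cases                 -- number of cases
    control_ratio           -- controls per case (factor 1 above)
    odds_ratio              -- assumed strength of the association (factor 2)
    exposure_prev_controls  -- base rate of the predictor in controls (factor 3)
    """
    n_controls = control_ratio * n_cases
    p0 = exposure_prev_controls
    # Exposure prevalence among cases implied by the assumed odds ratio
    p1 = (odds_ratio * p0) / (1 + p0 * (odds_ratio - 1))
    se = (p1 * (1 - p1) / n_cases + p0 * (1 - p0) / n_controls) ** 0.5
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(p1 - p0) / se - z_alpha)

# Hypothetical example: 100 cases, 2 controls per case, 10% exposure in controls, OR = 2
print(round(case_control_power(100, 2, 0.10, 2.0), 2))

With these example numbers the power comes out well under 80%, which is exactly the situation where you would want more studies in the pool.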
A study to be included in such an analysis should preferably have an experimental group and a control group. Many studies are not good enough for a meta-analysis, and it can be quite difficult to find relevant studies. A meta-analytic database contains detailed information from many quantitative studies, for example on methods, individuals, and treatments. It is therefore important to record how you chose the studies, what boundaries you set, what keywords were used in which databases, and so on. One bias is that studies with negative results often go unpublished. You can add more studies later on, and how many you should include depends on their scientific quality. The more the better!
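In case it helps, a minimal sketch (hypothetical field names and values, not a prescribed format) of the kind of record you might keep per included study, plus a log of the search itself, so the selection process can be reported and reproduced later:

# One record per included study
included_studies = [
    {"study": "Smith 2010",          # hypothetical reference
     "design": "case-control",
     "n_cases": 120, "n_controls": 240,
     "exposure": "predictor X",
     "effect_measure": "odds ratio",
     "effect": 1.8, "ci_95": (1.1, 2.9)},
    # ... one entry per study
]

# Documentation of the search strategy and selection boundaries
search_log = {
    "databases": ["PubMed", "Embase"],                      # hypothetical
    "keywords": ["rare cancer", "case-control", "predictor X"],
    "inclusion_criteria": "case-control design, odds ratio reported or computable",
    "exclusion_criteria": "abstract-only reports (full text sought first)",
}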
It would be better to exclude studies that have been published in PubMed or elsewhere only as an abstract and not as a full-text article. Furthermore, studies with negative results should be excluded according to the QUOROM guidelines. What if there are a lot of studies on a relevant topic with only abstracts published?
It would be your job to try to obtain the full-length articles. Unless you can access them through Internet search engines, I suggest looking for them on the authors' home pages, here on ResearchGate, or e-mailing the corresponding author. Usually it is possible, with a bit of work, to get hold of the article. Recall that the hardest work in most meta-analyses is the literature search; coding, analysing, and reporting are often far less time-consuming.
I don't really understand why one would want to exclude negative findings. Including just a subset (in this case, positive findings) will lead to over-estimating the effect size. What is the argument for leaving negative studies aside? Do you know?
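A quick simulation (again my own sketch, with arbitrary numbers) shows why: if every study estimates the same modest true log odds ratio with sampling noise, averaging only the studies that happened to come out significant clearly overstates the effect.

import numpy as np

rng = np.random.default_rng(0)
true_log_or = 0.2      # modest true effect (log odds ratio)
se = 0.3               # per-study standard error, assumed equal for simplicity
n_studies = 10_000

estimates = rng.normal(true_log_or, se, n_studies)
significant = estimates / se > 1.96          # the "positive" studies only

print("mean of all studies:        ", round(estimates.mean(), 2))               # close to 0.20
print("mean of significant studies:", round(estimates[significant].mean(), 2))  # clearly larger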