You can use the results of a meta-analysis to calculate the sample size. Essentially, you need the following to calculate the sample size for an RCT (with a categorical variable as the primary outcome):
1. Risk in control group
2. Expected risk in intervention group (or the relative risk)
3. Probability of the two types of errors (these are usually fixed: an alpha error of 0.05 and power (1 - beta error) of 90%).
In a meta-analysis, you will get a pooled relative risk, which can be used as the expected relative risk (point no. 2). The potential issue is how to estimate the 'risk in control group'. If you have baseline data from your own unit, you can use that as the risk in the control group. If you don't, you have to make an educated guess. You can use the average risk in the control groups of all the studies included in the meta-analysis. The other option is to use the risk in the control group of a particular study conducted in a setting that closely mimics yours. But you have to really justify such assumptions. (Of course, the assumptions and sample size estimation have to be made before you start the study. Also, if the meta-analysis has already shown a significant beneficial effect on the incidence of your intended primary outcome, you might not be ethically justified in doing the study!)
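The inputs above (control-group risk, expected relative risk, alpha and power) plug directly into the standard normal-approximation formula for comparing two proportions. A minimal sketch in Python, using only the standard library; the example numbers (control risk 30%, pooled RR 0.5) are illustrative assumptions, not values from any particular meta-analysis:

```python
import math
from statistics import NormalDist

def n_per_group(p_control, relative_risk, alpha=0.05, power=0.90):
    """Per-group sample size for comparing two proportions.

    Normal-approximation formula:
        n = (z_{1-a/2} + z_{1-b})^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2
    """
    p1 = p_control
    p2 = p_control * relative_risk  # expected risk in the intervention arm
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 1.28 for 90% power
    n = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return math.ceil(n)

# Illustrative: control risk 30%, pooled RR from a meta-analysis of 0.5
print(n_per_group(0.30, 0.5))   # -> 158 per group
```

Note this is the unadjusted large-sample formula; a continuity correction or an exact method would give somewhat larger numbers.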
If the outcome is a continuous variable, you can use the weighted mean difference as the estimate of the 'difference in means' between the two groups. But you need to estimate the standard deviation (SD) in the two groups. Again, you can use the SD in the study from a setting that closely mimics yours.
I hope this clarifies your query. Please let me know if you need more clarification.
Thanks a lot, Dr. Jeeva, for your answer. It clarified my query.
But I would like to know which is better: calculating the sample size from the results of a study that mimics mine, or from the meta-analysis result for studies addressing the same issue?
I disagree. You should not use the results of published research when calculating sample size.
Sample size should be based on the minimum clinically significant effect size. You need to base it on
1. The outcome in the treatment-as-usual group (or placebo group if you are using placebo)
2. The smallest improvement in outcome that would be regarded as clinically significant.
The smallest clinically significant effect size may be bigger or smaller than the effect size observed in previous research. This has no relevance to the sample size calculation.
I'm with Ronán on not basing what you're powering your study to detect on what others have previously found. You might still be interested in something a bit smaller, and wouldn't want to miss out on that. Power it for what you consider is a clinically important effect. I guess that's "on average" at the population level, rather than just for an individual.
I agree it's a very subjective decision, and the one that has the most impact on the eventual estimated sample size required. But as a clinician, you'll be well placed to know what your outcome means, and what a given improvement would mean.
(Unfortunately you can't say that any improvement is beneficial, because then you need an infinitely large sample!)
I think both views are not contradictory. I agree you power your study to detect a clinically important effect, but you have to justify whether this effect is plausible. To show plausibility, you may cite that the effect is consistent with the data observed in previous studies. This consistency may be because the effect is similar to the pooled effect size of the meta-analysis, or because it falls within the range of possible effects under reasonable certainty (say, the 95% CI of the pooled effect size in the meta-analysis).
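That plausibility check can be made explicit: compare the effect you intend to power for against the pooled 95% CI. A trivial sketch with hypothetical numbers (a target RR of 0.70 against an assumed pooled 95% CI of 0.55 to 0.85):

```python
def is_plausible(target_rr, ci_lower, ci_upper):
    """Does the clinically important effect fall inside the
    meta-analysis 95% CI of the pooled relative risk?"""
    return ci_lower <= target_rr <= ci_upper

# Illustrative numbers only
print(is_plausible(0.70, 0.55, 0.85))   # -> True
print(is_plausible(0.40, 0.55, 0.85))   # -> False: harder to justify
```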