Dear Dr. Ashok, techniques for sample size calculation are described in most conventional statistical textbooks. However, the wide range of formulas available for specific situations and study designs makes it difficult for most investigators to decide which method to use. Moreover, these calculations are sensitive to errors, because small differences in the selected parameters can lead to large differences in the sample size. One article I have gone through is:
Wittes J. Sample size calculations for randomized controlled trials. Epidemiol Rev. 2002;24:39–53.
A few statistical concepts behind sample size calculation in RCT design:
The null hypothesis and alternative hypothesis.
In statistical hypothesis testing, the null hypothesis is set out for a particular significance test and always occurs in conjunction with an alternative hypothesis. The null hypothesis is set up to be rejected: if we want to compare two interventions, the null hypothesis is "there is no difference" versus the alternative hypothesis of "there is a difference". However, failing to reject the null hypothesis does not mean that it is true; it only means that we do not have enough evidence to reject it.
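For instance, in a two-arm trial comparing mean outcomes, these hypotheses can be written formally as H0: μ1 = μ2 (no difference between the interventions) versus H1: μ1 ≠ μ2 (a difference in either direction).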
α/ type I error
In classical statistical terms, the type I error is always associated with the null hypothesis. From the probability theory perspective, there is no such thing as "my results are right", only "how much error am I committing". The probability of committing a type I error (rejecting the null hypothesis when it is actually true) is called α (alpha). For example, suppose we predefined a statistical significance level of α = 0.05 and a P value of 0.03 was found at the end of a completed two-arm trial. Two explanations for this significant difference can exist simultaneously (assuming that all biases have been controlled): either a real difference exists between the two interventions, or the difference arose by chance, and there is only a 3% probability of the latter. Hence, the closer the P value is to 0, the lower the chance that the difference is due to chance alone. To be conservative, a two-sided test is usually conducted rather than a one-sided test, which would require a smaller sample size. The type I error rate is usually set at a two-sided 0.05, although some study designs are exceptions.
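As a rough illustration of how the choice between one- and two-sided testing affects the required sample size, here is a minimal Python sketch using the standard normal-approximation formula for comparing two means, n per arm = 2(z_alpha + z_beta)^2 σ^2 / δ^2. The difference δ = 0.5 and standard deviation σ = 1.0 are illustrative assumptions, not values from the article.

from math import ceil
from scipy.stats import norm

alpha, power = 0.05, 0.80
delta, sigma = 0.5, 1.0                 # illustrative difference and common SD

z_beta = norm.ppf(power)                # z quantile for the desired power

def n_per_arm(z_alpha):
    # n = 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2 per arm
    return 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2

two_sided = n_per_arm(norm.ppf(1 - alpha / 2))   # z = 1.96
one_sided = n_per_arm(norm.ppf(1 - alpha))       # z = 1.645

print(ceil(two_sided), ceil(one_sided))          # about 63 vs 50 per arm

Under these assumptions the one-sided design needs roughly 20% fewer subjects per arm, which is exactly why the two-sided test is the conservative default.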
β/ type II error
As the null hypothesis is associated with the type I error, the alternative hypothesis is associated with the type II error, which occurs when we fail to reject the null hypothesis even though it is false. This is captured by the power of the study (1 − β, where β is the type II error rate): the probability of rejecting the null hypothesis when it is indeed false. Conventionally, the power is set at 0.80; the higher the power, the larger the sample required.
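To see how the power setting drives the sample size, here is a short sketch using the statsmodels power module for a two-sided two-sample t-test; the standardized effect size of 0.5 is again an illustrative assumption.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for power in (0.80, 0.90):
    # n per arm for a two-sided two-sample t-test at alpha = 0.05
    n = analysis.solve_power(effect_size=0.5, alpha=0.05,
                             power=power, alternative='two-sided')
    print(power, round(n))    # roughly 64 at power 0.80 vs 85 at power 0.90

Raising the power from 0.80 to 0.90 increases the required sample by about a third here, which illustrates the trade-off between power and feasibility.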
Hope these additional links and publications will help.