To answer this question, several points need to be considered.
1. Research studies are usually carried out on a sample of subjects rather than on whole populations. One of the most challenging aspects of fieldwork is drawing a random sample from the target population to which the results of the study will be generalized.
2. The key to a good sample is that it must be typical of the population from which it is drawn. When the information from a sample differs from that in the population in a systematic way, we say that error has occurred. In practice this is difficult to avoid, and several types of error can arise, e.g. sampling error, non-sampling error, response error, and processing error.
The most important of these is sampling error, which is statistically defined as the error caused by observing a sample instead of the whole population. The underlying principle that must be followed if we are to have any hope of making inferences from a sample to a population is that the sample be representative of that population.
3. A key way of achieving this is through randomization. There are several types of random samples, some of which are: simple random sampling, stratified random sampling, and two-stage random sampling. The most fundamental is the simple random sample, which is a sample selected in such a way that every possible sample of the same size is equally likely to be chosen. To reduce sampling error, use simple random sampling together with a sufficiently large sample size.
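As an illustration of simple random sampling, the standard library is enough; the sampling frame (subject IDs 1 to 1000) and the sample size of 50 below are purely hypothetical:

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible

# Hypothetical sampling frame: subject IDs 1..1000
population = list(range(1, 1001))

# Draw a simple random sample of 50 subjects without replacement;
# random.sample gives every subset of size 50 the same probability
# of being chosen, which is the defining property above.
sample = random.sample(population, 50)

print(len(sample))       # 50 subjects drawn
print(len(set(sample)))  # 50 distinct IDs: sampling is without replacement
```

In practice the "population" list would be an actual sampling frame (e.g. a registry of subjects), which is often the hardest part to obtain.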
4. The following factors strongly affect the sample size and need to be identified:
Population Size,
Margin of Error,
Confidence Level (related to the significance level) and
Standard Deviation.
5. To estimate the sample size, three issues need to be studied: the level of precision, the confidence (or risk) level, and the variability. Regarding the last issue, on which your question concentrates, the degree of variability in the attributes being measured refers to the distribution of attributes in the population. The more heterogeneous a population, the larger the sample size required to obtain a given level of precision; the less variable (more homogeneous) a population, the smaller the sample size. Note that a proportion of 50% indicates a greater level of variability than either 20% or 80%, because 20% and 80% indicate that a large majority do not, or do, respectively, have the attribute of interest. Because a proportion of 0.5 indicates the maximum variability in a population, it is often used to determine a more conservative sample size; that is, the sample size may be larger than if the true variability of the population attribute were used.
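The claim that 50% is the most variable case can be checked directly: the variance of a yes/no attribute with prevalence p is p(1 − p), which peaks at p = 0.5. A minimal sketch:

```python
# Variance of a binary (yes/no) attribute with prevalence p is p*(1-p);
# it is largest at p = 0.5, which is why 0.5 yields the most
# conservative (largest) sample size.
def attribute_variance(p):
    return p * (1 - p)

for p in (0.2, 0.5, 0.8):
    print(p, attribute_variance(p))
# 0.2 and 0.8 both give 0.16, while 0.5 gives the maximum, 0.25
```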
6. The Cochran formula allows you to calculate an ideal sample size given a desired level of precision, a desired confidence level, and the estimated proportion of the attribute present in the population.
Cochran’s formula is considered especially appropriate for large populations. A sample of any given size provides proportionally more information about a small population than about a large one, so there is a ‘finite population correction’ through which the number given by Cochran’s formula can be reduced if the whole population is relatively small.
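A sketch of Cochran’s formula and the finite population correction, where z is the critical value for the chosen confidence level, p the estimated proportion, and e the margin of error; the values z = 1.96 (95% confidence), p = 0.5, e = 0.05, and N = 1000 below are illustrative:

```python
import math

def cochran_n0(z, p, e):
    # Cochran's formula for a large population:
    # n0 = z^2 * p * (1 - p) / e^2
    return (z ** 2) * p * (1 - p) / (e ** 2)

def cochran_corrected(n0, N):
    # Finite population correction for a population of size N:
    # n = n0 / (1 + (n0 - 1) / N)
    return n0 / (1 + (n0 - 1) / N)

# Most conservative case: p = 0.5, 95% confidence, 5% margin of error
n0 = cochran_n0(z=1.96, p=0.5, e=0.05)
print(math.ceil(n0))  # 385 for a large population

# The same precision for a hypothetical population of only N = 1000
print(math.ceil(cochran_corrected(n0, N=1000)))  # 278
```

Note how the correction reduces the required sample substantially when the population itself is small, as the answer describes.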
If you are asking how to calculate power and determine a suggested sample size, you can use simulation methods when none of the standard power packages is appropriate. You need to decide what you want your study design to achieve (e.g., how precise you want your estimates to be, whether you want to test hypotheses, etc.). You will need to provide more information to get more specific responses.
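As a sketch of the simulation approach, assuming a two-sample comparison of proportions (the 30% vs. 45% success rates and n = 200 per group are placeholders for your own design): simulate many studies under the assumed effect, test each one, and report the rejection rate as the estimated power.

```python
import random

def simulate_power(n, p_control, p_treatment, reps=2000, seed=0):
    # Estimate power by simulation: repeatedly draw two samples of size n,
    # run a two-proportion z-test (two-sided, alpha = 0.05), and count
    # how often the null hypothesis of equal proportions is rejected.
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        x1 = sum(rng.random() < p_control for _ in range(n))
        x2 = sum(rng.random() < p_treatment for _ in range(n))
        p1, p2 = x1 / n, x2 / n
        p_pool = (x1 + x2) / (2 * n)          # pooled proportion under H0
        se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
        if se > 0 and abs(p1 - p2) / se > 1.96:
            rejections += 1
    return rejections / reps

# Hypothetical scenario: 30% vs. 45% success rates, n = 200 per group
print(simulate_power(200, 0.30, 0.45))
```

To find a suggested sample size, you would run this over a range of n values and pick the smallest n whose estimated power meets your target (commonly 0.80).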