Because the more information you have, the more you usually know. In the simplest case the sample size is n = z^2 (variance) / bound^2. Thus the bigger the variance in the data, the larger the sample size must be to achieve a fixed level of accuracy. We set the accuracy we want and can then find the appropriate sample size. The wrong sample size can mean badly wasted resources or too little final information; that is why it is important. Best, D. Booth. For full details, see an entry-level statistics textbook.
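As a rough illustration of that formula, here is a minimal sketch in Python. The function name and the example numbers are my own; it assumes a known population standard deviation and the usual normal-approximation z value (1.96 for 95% confidence):

```python
import math

def sample_size_for_mean(z, sigma, bound):
    """Minimum n so that a z-level confidence interval for the mean
    has half-width at most `bound`, given population std dev sigma.
    n = (z * sigma / bound)^2, rounded up to a whole subject."""
    return math.ceil((z * sigma / bound) ** 2)

# 95% confidence (z = 1.96), sigma = 15, desired accuracy (half-width) = 2
n = sample_size_for_mean(1.96, 15, 2)
# Doubling sigma quadruples the required n, illustrating the point above
n_big_var = sample_size_for_mean(1.96, 30, 2)
```

Note how the required n scales with the variance: for fixed z and bound, doubling the standard deviation multiplies n by four.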
A sample with too few elements does not give the expected results, and the standard error of the statistical conclusion becomes large. It is also possible to take an unnecessarily large number of items in the sample (it will take more time and money, and the statistical research will be needlessly prolonged ...) when the same effect, that is, the same standard error, could be achieved with a smaller number of elements in the sample. Therefore, statistical theory offers numerous formulas for calculating the optimal number of elements in the sample. This is done for different cases, depending on the method of selecting elements into the sample (simple random sample, systematic, stratified, ...)
I agree with David's response and would like to add a couple of notes. While the emphasis placed on sample size calculation in observational studies varies across disciplines, there is a tendency toward incorporating such information as a crucial element of study design. These calculations are intended to account for the components that can affect study outcomes and implications, including type 1 and type 2 errors, the variance of the data, and the minimal clinically relevant difference.
I suggest you consult the STROBE guidelines for further information: Article: The Strengthening the Reporting of Observational Studies in ...
Let’s put this in a different light. Sample size calculation is crucial to reduce the chance of a false negative. If you want to know whether your country sells fast food, you probably need to find just one fast food outlet to draw that conclusion. But what if you want to say there is no fast food outlet? You would then need to sample representatively, perhaps randomly across the country, and cover a good proportion of the area (the sample size), so that you can say with good confidence that there is no fast food outlet in this place.
Even if the observational study is non-comparative and does not involve hypothesis testing, there is still value in doing a sample size estimation.
Each time we draw a subset (a sample) to make conclusions about the target population, there is always a chance that the drawn subset is not truly representative of the target population. To allow for this, researchers need to specify (and state) what they are willing to accept in terms of 1) the level of certainty that, if the survey were repeated several times, it would give the same results (i.e. the confidence level), and 2) the width (narrow vs. wide) of the interval within which a particular mean would fall (i.e. the confidence interval). The inherent variability between individuals in the population also plays a role in how big a sample you need in order to get an accurate representation of the target population. (Imagine sampling for hair color in Asia vs. in cosmopolitan cities like Paris or New York; you would need to sample a larger number of people in the latter, where there is wide variability of hair color, to get an accurate representation of the population.)
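The hair-color example above can be made concrete with the standard sample size formula for estimating a proportion, n = z^2 p(1-p) / margin^2. This is a minimal sketch; the function name and example proportions are my own, and it uses the normal approximation with p = 0.5 representing maximum variability (a very mixed population) vs. p = 0.95 representing a nearly homogeneous one:

```python
import math

def sample_size_for_proportion(z, p, margin):
    """Minimum n so a z-level confidence interval for a proportion
    near p has half-width at most `margin` (normal approximation).
    Variability p*(1-p) is largest at p = 0.5."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# 95% confidence, +/- 5 percentage points
n_diverse = sample_size_for_proportion(1.96, 0.5, 0.05)   # very mixed population
n_homog = sample_size_for_proportion(1.96, 0.95, 0.05)    # nearly uniform population
```

With the same confidence level and margin, the more variable population needs a substantially larger sample, which is exactly the Paris/New York vs. homogeneous-population contrast described above.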
It would be good to review the Central Limit Theorem, which forms the basis for sample size computations.
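The Central Limit Theorem can be seen directly in a small simulation. This sketch (details of the population and sample sizes are my own choices) draws repeated samples from a skewed exponential population and shows that the sample means cluster around the population mean with spread shrinking like sigma / sqrt(n):

```python
import random
import statistics

random.seed(0)  # reproducibility of the illustration

def sample_means(n, reps):
    """Means of `reps` samples of size n drawn from an exponential
    population (mean 1, sd 1). By the CLT these means are
    approximately normal for moderate n, despite the skewed source."""
    return [statistics.fmean(random.expovariate(1.0) for _ in range(n))
            for _ in range(reps)]

means = sample_means(n=50, reps=2000)
center = statistics.fmean(means)   # should be near the population mean, 1.0
spread = statistics.stdev(means)   # should be near 1 / sqrt(50), about 0.14
```

This shrinking spread of the sample mean is what the sample size formulas in the other answers exploit: choosing n controls the standard error of the estimate.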
In my experience, researchers (especially at medical universities) ask the statistician to reduce the sample size because of the limited availability of patients with the specific condition.