This is in context of a community survey to detect differences in risk factors between two groups who both suffered from an infectious disease but only one group exhibited a specific complication.
You need to be confident that the sample is representative and large enough so that any tendencies detected in the sample can be generalised to the wider population. There are many ways to calculate sample size (many of them based on power), but if you are looking for a simple calculation then the "margin of error" might be a good place to start. It is given by the formula 1/√N, where N is the number in your sample: a sample of 100 persons gives a 10% margin of error, and to get the margin of error to 5% or below you will need a sample of 400 persons. As an even simpler rule of thumb, no survey should have fewer than 300 persons...
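The rule of thumb above can be written as a couple of lines of code. This is only the simple 1/√N approximation, not a full power calculation:

```python
import math

def margin_of_error(n):
    """Rule-of-thumb margin of error for a sample of size n: 1/sqrt(n)."""
    return 1 / math.sqrt(n)

def sample_size_for_margin(e):
    """Smallest n whose 1/sqrt(n) margin of error is at most e."""
    return math.ceil(1 / e ** 2)

print(margin_of_error(100))          # 0.1, i.e. a 10% margin of error
print(sample_size_for_margin(0.05))  # 400
```

Note that halving the margin of error quadruples the required sample, which is why precision gets expensive quickly.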
I suggest you try this "Sample size and power calculator". It lets you choose which type of measure you are analysing, and several statistical parameters. It has been developed by a research group at my Institution and works very well.
You can use the online version here (click on the upper right tab for English):
Smita, generally speaking, the advice so far is helpful. The most important thing is that you identify any questions you may have in advance; often this isn't thought out. For example, might you want to know whether men and women differ in their opinion on something? Maybe you want to know whether young people, let's say under 40, differ from older people. If you do care about such differences, how big a difference might be important? You can Google "effect size" to see what small, medium, and large effects are.

These are the sorts of things people often try to do with a survey. It's particularly important if you want to know about groups that make up a small part of the population you are surveying. That might require a larger sample in order to find those differences, or stratifying to make those groups larger in the sample than they would be if you just sampled at random.

One thing that is really helpful is that you can find a lot of great resources, including calculators you can use, just by Googling topics such as "statistical power" or "sample size." When you know what comparisons you want to make, how big those groups are, and how big a difference (effect) matters to you, then you can figure out how big a sample you need. If you don't have questions like that, it is then just a simple matter of how precise you'd like your estimates to be.
But to state it again, the most important thing you can do is to figure out your questions before you start. The second most important thing you can do is find out what others have done with similar surveys. Bob
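To make the "effect size" point above concrete: the conventional benchmarks for a standardized mean difference (Cohen's d) are roughly 0.2 (small), 0.5 (medium), and 0.8 (large). A sketch using those conventions; the group means and pooled SD below are made-up illustration values:

```python
def cohens_d(mean1, mean2, pooled_sd):
    """Standardized mean difference between two groups."""
    return (mean1 - mean2) / pooled_sd

def effect_label(d):
    """Cohen's conventional cut-offs: 0.2 small, 0.5 medium, 0.8 large."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Hypothetical example: group means 52 and 47, pooled SD 10 -> d = 0.5
print(effect_label(cohens_d(52.0, 47.0, 10.0)))  # medium
```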
All comments are correct. Here's the summary, in a nutshell.
Type I error (α): Prob(reject Ho | Ho is true) (e.g. α = 0.05)
Type II error (β): Prob(fail to reject Ho | Ho is false)
Power: 1 - β
Per-group N: the minimum per-group sample size required to detect a difference δ between the two groups
Effect size (δ): the magnitude of the difference between the two groups
Variance (σ²): the variance of the outcome measure being tested
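These quantities fit together in the approximate power of a two-sided two-sample z-test of means, power ≈ Φ(|δ|·√(n/2)/σ − z₁₋α/₂). A sketch under the usual equal-group-size, known-variance assumptions (δ = 5 and σ = 10 are made-up values):

```python
from statistics import NormalDist

def power_two_sample(n_per_group, delta, sigma, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test of means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)        # z_{1-alpha/2}
    noncentrality = abs(delta) * (n_per_group / 2) ** 0.5 / sigma
    return NormalDist().cdf(noncentrality - z_alpha)

# Hypothetical numbers: delta = 5, sigma = 10, 63 persons per group
print(round(power_two_sample(63, delta=5, sigma=10), 2))  # about 0.8
```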
So: in order to set your Type I and Type II errors, you need to define Ho. The research question you want to test is the place to start.
What's the effect size you expect to see in your data? This is a difference between means if the outcome is continuous, or a difference between proportions if it is binary. Be as realistic here as you can. Likewise, be realistic about the variance, basing it on prior data, published findings from populations comparable to your study population, etc.
Per-group N and effect sizes are often calculated for several choices of α and β. It's then a matter of plugging in values and solving for the unknowns in the standard sample-size formula.
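For a two-sided two-sample comparison of means, that "plugging in and solving" uses n = 2(z₁₋α/₂ + z₁₋β)²σ²/δ² per group. A sketch that tabulates n for a few choices of α and power; δ = 5 and σ = 10 are made-up values, not from the question:

```python
import math
from statistics import NormalDist

def per_group_n(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-sample z-test of means:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Tabulate per-group n for several alpha / power choices
for alpha in (0.05, 0.01):
    for power in (0.80, 0.90):
        print(f"alpha={alpha}, power={power}: n = {per_group_n(5, 10, alpha, power)}")
```

Halving δ (or doubling σ) quadruples the required n, so small expected effects drive sample sizes up fast.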