I am assessing the reliability and construct validity of the OSAUS, i.e., an assessment tool for ultrasound operator competence. I will compare the mean scores of experts, intermediates, and novices.
Sample sizes are judged based on the quality of the resulting estimates. For example, if a proportion is being estimated, one may wish to have the 95% confidence interval be less than 0.06 units wide. Alternatively, sample size may be assessed based on the power of a hypothesis test.
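As a worked check (using the conservative worst-case proportion p = 0.5, an assumption for illustration): a full interval width of 0.06 means a half-width of 0.03, so the required sample size is n = p(1 - p) × (z / 0.03)² = 0.25 × (1.96 / 0.03)² ≈ 1,068 respondents.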
A sample is a part of the population chosen for a survey or experiment. For example, you might take a survey of dog owners' brand preferences. You won't want to survey all the millions of dog owners in the country (it's too expensive or time-consuming), so you take a sample. That sample may be several thousand owners, and it stands in for all dog owners' brand preferences. If you choose your sample wisely, it will be a good representation.
When can error creep in?
When you only survey a small sample of the population, uncertainty creeps into your statistics. If you can only survey a certain percentage of the true population, you can never be 100% sure that your statistics are a complete and accurate representation of the population. This uncertainty is called sampling error and is usually expressed with a confidence interval. For example, you might state that your results are at a 90% confidence level. That means if you were to repeat your survey over and over, about 90% of the intervals you construct would contain the true population value.
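A quick simulation makes this concrete (a minimal sketch assuming NumPy; the true proportion of 0.30 and n = 400 are illustrative values, not from any real survey):

```python
import numpy as np

rng = np.random.default_rng(0)
true_p, n, reps = 0.30, 400, 10_000
z = 1.645  # z value for a 90% confidence level
covered = 0
for _ in range(reps):
    p_hat = rng.binomial(n, true_p) / n          # observed sample proportion
    half = z * np.sqrt(p_hat * (1 - p_hat) / n)  # half-width of the interval
    covered += p_hat - half <= true_p <= p_hat + half
print(covered / reps)  # close to 0.90: ~90% of intervals contain true_p
```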
To determine the sample size for a population greater than 10,000 (with no upper limit) at a 95% level of certainty, a formula suggested by Saunders et al. (2011) can be used:
n = p% × q% × (z / e%)²
where n = minimum sample size, p% = proportion belonging to the specified category, q% = proportion not belonging to the specified category, z = z value (z = 1.96 for a 95% level of certainty), and e% = margin of error in percentage points (corresponding to the chosen z value).
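A minimal Python sketch of this calculation (the function name and the illustrative p = 50%, e = 5% values are my own; 50% is the conservative worst case when you have no prior estimate):

```python
def saunders_sample_size(p_pct, z=1.96, e_pct=5.0):
    """Minimum sample size n = p% * q% * (z / e%)**2 (Saunders et al., 2011).

    p_pct: percentage in the specified category (0-100)
    z:     z value (1.96 for a 95% level of certainty)
    e_pct: margin of error, in percentage points
    """
    q_pct = 100.0 - p_pct
    return p_pct * q_pct * (z / e_pct) ** 2

print(saunders_sample_size(50))  # 384.16 -> round up to 385 respondents
```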
You can perform a pilot study with 10-12% of the intended sample, as suggested by @Malini Ganapathy, and use the results of the pilot study in the formula recommended by Saunders et al. (2011).
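For instance (illustrative numbers only): if the pilot suggests p% = 30, so q% = 70, then with e% = 5 the formula gives n = 30 × 70 × (1.96 / 5)² ≈ 323 after rounding up.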
The sample size is chosen to maximise the chance of detecting a specific mean difference at a given level of statistical significance.
This formula, n = (z × σ / E)², can be used when you know the population standard deviation σ and want to determine the sample size necessary to estimate the mean, at a chosen level of confidence, to within ±E. You can still use this formula if you don't know the population standard deviation, substituting an estimate from a small preliminary sample (Webster, 1985).
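As a worked example (assumed numbers): to estimate a mean score to within ±2 points at 95% confidence (z = 1.96), with a standard deviation of 10 taken from a pilot, n = (1.96 × 10 / 2)² = 96.04, so at least 97 observations are needed.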
There are three possible answers to your question.
1. For instruments in the developmental phase, how many respondents would be required in order for developers to detect structural problems, misunderstandings, and related problems? That is, during the initial field test. It doesn't take too many cases, but a lot of folks recommend between 100 and 200.
2. To determine whether a statistically significant relationship between scores on variables (or a statistically significant difference between groups/populations) exists in scores on one or more measures. In this context, statistical power is the primary concern; Jacob Cohen's Statistical Power Analysis for the Behavioral Sciences (1988) is an excellent resource. The free G*Power program (http://www.gpower.hhu.de/en.html) will also help you compute the requisite sample size, given the constraints you choose: desired power, desired significance level, minimal degree of difference/relationship deemed worthy of detection, and number of IVs and DVs. A minimal code sketch of such a calculation follows this list.
3. To determine a population's value on some characteristic, what sample size would be required to achieve a desired level of precision in estimating that value (parameter)? This is the classic question associated with survey sampling. Naveed Ahmad's answer gives you one formula applicable here. There are others, depending on the sampling method used, whether the characteristic is dichotomous or continuous, and whether the population is finite or not; a common adjustment for finite populations is sketched below as well.
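For option 2 and your three-group comparison (experts vs. intermediates vs. novices), here is a minimal Python sketch using statsmodels, a free alternative to G*Power; the "medium" effect size f = 0.25 is an assumed value for illustration, not something derived from your data:

```python
from statsmodels.stats.power import FTestAnovaPower

# Total sample size for a one-way ANOVA with three groups,
# alpha = .05, power = .80, assumed effect size f = 0.25 (Cohen, 1988).
n_total = FTestAnovaPower().solve_power(
    effect_size=0.25, k_groups=3, alpha=0.05, power=0.80
)
print(n_total)  # total N across all three groups; divide by 3 per group
```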
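For option 3 with a finite population, the standard finite population correction shrinks the required sample size (the N = 2,000 population below is purely illustrative):

```python
# Adjust a required sample size n0 (computed for an infinite population)
# for a finite population of size N.
def fpc_adjust(n0, N):
    return n0 / (1 + (n0 - 1) / N)

print(fpc_adjust(385, 2000))  # ~323 respondents suffice when N = 2,000
```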