The $100(1-\alpha)\%$ confidence interval for the mean is $\bar{x} \pm z_{\alpha/2}\,\sigma/\sqrt{n}$, so the margin of error is $e = z_{\alpha/2}\,\sigma/\sqrt{n}$. Solving this for $n$ gives $n \ge \left(z_{\alpha/2}\,\sigma/e\right)^2$. The required sample size $n$ is therefore a function of the margin of error $e$, the significance level $\alpha$, and the population standard deviation $\sigma$: the smaller $\alpha$ is (i.e., the higher the confidence), the larger $n$ must be; the smaller the margin of error $e$, the larger $n$ must be; and the larger $\sigma$ is, the larger $n$ must be.
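As a minimal sketch of this computation (the function name `required_sample_size` and the example values below are illustrative, not from the original), the formula can be evaluated directly, using `scipy.stats.norm.ppf` for the critical value $z_{\alpha/2}$:

```python
import math
from scipy.stats import norm

def required_sample_size(e, alpha, sigma):
    """Smallest n such that z_{alpha/2} * sigma / sqrt(n) <= e."""
    z = norm.ppf(1 - alpha / 2)            # two-sided critical value z_{alpha/2}
    return math.ceil((z * sigma / e) ** 2)  # round up: n must be an integer

# Example: sigma = 15, 95% confidence (alpha = 0.05), margin of error e = 2
print(required_sample_size(e=2, alpha=0.05, sigma=15))  # -> 217
```

Rounding up with `math.ceil` is the conservative choice: rounding down would give a margin of error slightly larger than the target $e$. The example also illustrates the stated relationships, e.g. halving $e$ to 1 roughly quadruples the required $n$.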