02 February 2020

Dear researchers,

According to the central limit theorem, for a large number n of independent, identically distributed random samples X1, X2, ..., Xn with expected value mu and standard deviation sigma, the following confidence interval is defined:

[mu - z*sigma/sqrt(n), mu + z*sigma/sqrt(n)]
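For concreteness: with sigma = 1 and z = 1.96 (the 95% level), the half-width of this interval is 1.96/sqrt(100) ≈ 0.196 for n = 100 but only 1.96/sqrt(1000) ≈ 0.062 for n = 1000, so the interval shrinks roughly like 1/sqrt(n).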

As can be seen from the interval above, as the number of samples n increases, the interval becomes narrower; as a result, many samples fall outside the confidence interval, which is not good for our prediction of the problem. I have a question, which I put in the following form:

What is the precise definition of a confidence interval? Is it a confidence interval for the average of the Xi, or for the Xi themselves? In other words, should all Xi lie within the interval with a probability of, say, 95%, or is this not necessary?

I have also attached MATLAB code based on the randn function; one can see that as n increases the CI becomes narrower and many Xi fall outside the CI, whereas for lower values of n the results seem reasonable.
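Since the attachment itself may not be visible here, the following minimal MATLAB sketch reproduces the kind of experiment I mean (the variable names and values are my own choices, not the original attachment): draw n samples with randn, form the 95% confidence interval for the mean with z = 1.96, and count how many individual samples fall outside it.

    % Minimal sketch (assumed values, not the original attachment):
    % draw n standard-normal samples and compare them with the 95% CI for the mean.
    n = 1000;                        % sample size; try n = 10 vs. n = 1000
    mu = 0; sigma = 1;               % randn draws from N(0,1)
    x = mu + sigma*randn(n,1);       % the samples X1, ..., Xn
    z = 1.96;                        % 97.5th percentile of the standard normal
    ci = [mean(x) - z*sigma/sqrt(n), mean(x) + z*sigma/sqrt(n)];
    outside = sum(x < ci(1) | x > ci(2));   % individual samples outside the CI
    fprintf('CI = [%.3f, %.3f]; %d of %d samples lie outside\n', ci(1), ci(2), outside, n);

For large n this prints an interval much narrower than [-1.96, 1.96], so most individual samples lie outside it, which is exactly the behaviour my question is about.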

Thanks for your responses.
