In most cases, the confidence level is taken as 95%? How do you get this value? What is the practical significance of this value?
I suggest reading some books on statistics. It is quite a fundamental question. I will give short answers to your questions anyway, though...
What do you mean by confidence interval in statistical analysis?
It is an interval estimate for a parameter value. It is constructed so that, in the long run, a given proportion of these intervals will include the unknown true parameter value. The proportion is given by the "level of confidence". For instance, you can expect that at least 90% of (a large series of) 90% confidence intervals will include the unknown true values of the parameters.
In most cases, the confidence level is taken as 95%?
Yes.
How do you get this value?
This depends on the parameter and the error model. Statistical software calculates such intervals, so a user doesn't actually need to know the technical details. A frequent problem is to give the CI for a mean value (xbar). This is calculated as xbar ± standard error × t-quantile, where the t-quantile is chosen to achieve the desired confidence level.
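To make this concrete, here is a minimal sketch in R (the data are made up for illustration), computing the interval "by hand" and checking it against t.test():

x <- c(5.1, 6.2, 5.8, 6.4, 5.5, 6.1, 5.9, 6.3)  # hypothetical sample
xbar <- mean(x)
se <- sd(x) / sqrt(length(x))        # standard error of the mean
tq <- qt(0.975, df = length(x) - 1)  # t-quantile for 95% confidence
c(xbar - tq * se, xbar + tq * se)    # CI.95 "by hand"
t.test(x)$conf.int                   # the same limits from t.test()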
What is the practical significance of this value?
It gives you an impression of the precision of the parameter estimate. Values spanned by this interval are seen as "not too unexpected to be true". CIs are actually a frequentist tool, but a further interpretation is Bayesian: given a flat prior, the CI is identical to the maximum a posteriori interval ("credible interval"). Here, the interpretation is inverted: instead of saying that at least a given proportion of such intervals will include the true value, the Bayesian interpretation is that this particular interval includes the true value with a given probability.
Looking at mean values, giving the CI is not in principle different from giving the standard errors (both are measures of precision), but the CI is much easier and clearer to interpret than the standard errors, since it directly gives you a range of "not too unreasonable values" of the estimate. Further, the 95% CIs include the information about the null hypothesis test at the 5% level (significance = 1 − confidence): the null hypothesis can be rejected at the 5% level if the 95% CI does not include the null value.
For a more comprehensive (and complicated) answer to your question look at the paper:
"Confidence Distribution, the Frequentist Distribution Estimator of a Parameter: A Review" by Min-ge Xie and Kesar Singh.
http://www.stat.rutgers.edu/home/mxie/RCPapers/insr.12000.pdf
That is an everyday task of a scientist. Read any statistics textbook or look it up on Wikipedia.
A simple and useful book on this topic is: "Statistics with Confidence: Confidence Intervals and Statistical Guidelines", 2nd Edition, edited by Douglas Altman, David Machin, Trevor Bryant, and Martin Gardner, February 2000, BMJ Books. It also comes with a disk giving practical instructions for CIs.
A complement to Jochen's comprehensive answer:
1) The 95% level is purely conventional; you can choose other levels according to your application (and other conventions exist in different research and application fields).
2) The link with the null hypothesis works well only if you construct the confidence interval under the null hypothesis. It does not matter for means, but it can lead to different results when used for proportions (test: you use the theoretical proportion value under the null hypothesis to build the test statistic; estimation: you use the estimated proportion value to build the confidence interval; hence the SD estimate differs in the two cases). Tests and confidence intervals do not answer exactly the same question...
@Viktor, thank you for this review. It took me years to gather essentially the same insights... There was one very nice sentence, actually saying that
frequentists are looking for estimates *of* a parameter, whereas
Bayesians are looking for estimates *for* a parameter.
So tiny are the differences :)
Let's say you have estimated some quantity, e.g., the mean of a population, by analyzing some sample. It is only an estimate, since, of course, the sample is usually much smaller than the entire population. Thus, when you present your data, it makes sense not only to provide a single number - the estimated mean value - but also to estimate the uncertainty of this result. The confidence interval characterizes this uncertainty. It is defined as a range of numbers within which the sought quantity (the mean value of the population) can be found with a given probability (90%, 95%, or some other).
Just a comment on this topic. Sometimes my students ask - and if not, I ask them: why not calculate a 100% confidence interval? And the answer is illuminating: "because then you will have an interval covering the complete parameter space", i.e., the complete real line from minus infinity to plus infinity in the case of the mean, or the interval from zero to one for a probability... Confidence intervals are better than point estimates precisely because they give you a better idea of the precision of your estimate; indeed, they are equivalent to a hypothesis test around the estimated point. There is a lot to be said; for example, when your estimator is a maximum likelihood estimator, the confidence interval gives you an idea of the curvature of the likelihood function around the estimated point... I think it is good to ask and answer this kind of innocent question, because it forces us to go back to basics...
Hi Anvita, looking at CIs may be an everyday and basic task of any researcher, as some have remarked, but one that is misunderstood by many:
Belia S, Fidler F, Williams J, et al. (2005) Researchers Misunderstand Confidence Intervals and Standard Error Bars. Psychological Methods 10: 389-396.
Confidence limit and confidence intervals are not the same thing.
I summarize this fact in my paper about quality control:
https://www.researchgate.net/publication/233878520_Quality_control_in_bio-monitoring_networks_Spanish_Aerobiology_Network
A confidence interval for a fixed parameter θ represents a plausible range of values for the parameter that is consistent with the observed data. Specifically, for a single parameter θ, the interval (L, U) is a 100(1−α)% confidence interval for θ if Pr(L ≤ θ ≤ U) = 1−α. The quantity 1−α is called the confidence level, and is equal to the probability that the random interval (L, U) contains the fixed parameter θ. The confidence limits L and U are constructed from the observed data in such a way that in infinite replications of the study, the proportion of such intervals that contain the parameter θ, or the coverage probability, is 1−α.
I suggest you obtain a stat program and a book with examples, etc. to help you learn stats better.
Conceptually, for a CI, think of intelligence testing. CIs are used to convey the reality that, if tested over and over again, anyone would score close to their original score a high percentage of the time. 95%, 99%, or even 90% CIs are used; for example, if one has a Full Scale IQ score of 121, that person would also have a CI of about 115 to 128 or so, with the interpretation that, on any given day, 95 (or 90 or 99) percent of the time their IQ score will fall between these two numbers. That addresses the reality of variability regardless of how stringent we believe we are being.
Consider, for example, the space of samples of size 15 from a normal population with mean 175 and variance 44. Imagine observing the samples in succession from this sample space. To each observed sample corresponds a numerical range, obtained by adding to and subtracting from the sample average the margin of error... see the attachment.
I attach a very simple paper which describes the basics of confidence interval and its use in research.
Ana-Maria, just a note:
In the attached paper you wrote that "The P value describes probability that the observed phenomenon (deviation) occurred by chance". That's wrong. This is a common misconception. The correct meaning is that the P value is the probability of observing the obtained or a more extreme statistic in a world where the tested effect does not exist (in technical language: "given H0").
The expression that something is "happening by chance" is just a way to say that we do not have any explanation for the observation. If we had an explanation, then it would not be described as having happened "by chance". It is obvious that P is not a probability of having no explanation for an observation.
Don't use "confidence intervals"; ask yourself: why do they say a 95% confidence interval instead of a 95% probability interval? I mean, I know the axioms of PROBABILITY, but there are no axioms of CONFIDENCE. They use the word "confidence" because the use of "probability interval" is not justified under the classical statistics approach. What you want is an interval such that there is a high probability that the parameter belongs to it, and that implies a probabilistic modeling of your uncertainty about the parameter, that is, to model the parameter as a random variable. This can be achieved under a Bayesian approach, and you will always get an admissible interval, in contrast with the classical approach, which may lead to inadmissible intervals (intervals outside the set of admissible values for the parameter). See The Bayesian Choice by Christian P. Robert (2007), Springer.
Arturo, please read the paper Viktor linked yesterday:
http://www.stat.rutgers.edu/home/mxie/RCPapers/insr.12000.pdf
Jochen, you are absolutely right; thanks for the clarification. What I wanted to say is that the P value denotes the probability of getting a result greater than the value you obtained when there is no effect in the population. In other words, the lower the P value, the greater is the probability that you have observed a real effect in your study.
Sorry to correct you again, Ana-Maria...
"In other words, the lower the P value, the greater is the probability that you have observed the real effect in your study."
This wrong view directly follows from the previous misconception...
The P value does NOT tell us the probability that an effect exists! This can only be inferred using Bayesian statistics. The P-value is defined as
P = P(data|H0)
and you are talking about P(H0|data) or P(HA|data). Note that P(H0|data) ≠ P(data|H0) and that you need Bayes theorem to calculate
P(H0|data) = P(data|H0) * P(H0)/P(data)
where P(H0) is an a-priori probability of H0 and P(data) is the total probability of the data.
The P value is a random variable, and this alone does not in any way help you to decide whether or not there is an effect. Rejecting H0 whenever P ≤ α only ensures that, in the long run, the rate of falsely rejected null hypotheses will not exceed α.
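To see the difference between P(data|H0) and P(H0|data) with concrete numbers, here is a small R sketch; every probability in it is an invented assumption, chosen only to make the arithmetic visible:

p_data_H0 <- 0.04  # assumed P(data|H0)
p_data_HA <- 0.60  # assumed P(data|HA): how likely the data are if the effect exists
p_H0 <- 0.50       # assumed prior probability of H0
p_data <- p_data_H0 * p_H0 + p_data_HA * (1 - p_H0)  # total probability of the data
p_data_H0 * p_H0 / p_data  # P(H0|data) by Bayes' theorem: 0.0625, clearly not 0.04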
I neglected to tell you that practical significance can be assessed with a simple formula that produces a statistic called eta-squared.
OK Jochen, I will check the paper, thanks. But could you tell me what would be a 95% confidence interval for the parameter p of a univariate Bernoulli distribution with the following observed sample: 0, 0, 0?
@Arturo:
The likelihood function for the binomial response has the form of a beta distribution (as the t-distribution has for a "normal" response). The CI.95 for p given your data is 0...0.71.
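This interval can be reproduced in R; as mentioned further below in this thread, binom.test() gives the Clopper-Pearson limits:

binom.test(x = 0, n = 3, conf.level = 0.95)$conf.int
# [1] 0.0000000 0.7076099  -> roughly 0 to 0.71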
@Monther:
This is a kind of bootstrap procedure. IMHO this is fine. You get a bootstrap distribution from which you can directly determine the position of the maximum density and the central range covering 95% of all the bootstrap values (the limits are the 0.025- and 0.975-quantiles of the bootstrap distribution). This would correspond to the maximum likelihood estimate (which for "normal" data is the average) and the CI.95.
You can give ±SD instead of the CI; this is not wrong, but it is less useful and sometimes even misleading, for instance when the distribution is skewed; then giving a symmetric interval is simply inappropriate.
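A minimal sketch of this bootstrap-percentile idea in R, with made-up skewed data:

set.seed(1)
x <- rexp(50, rate = 0.2)  # hypothetical skewed sample
boot <- replicate(10000, mean(sample(x, replace = TRUE)))  # resample and re-estimate
quantile(boot, c(0.025, 0.975))  # percentile CI.95 for the mean
hist(boot)                       # inspect the bootstrap distribution itself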
Jochen: OK, but the maximum likelihood estimator, which in this case is equal to the uniformly minimum variance unbiased estimator of p, is zero, a value that does not belong to the interval and is not an admissible parameter value! What a drawback :-( I prefer the Bayesian 95% probability interval ]0, 0.444[ with point estimator equal to 0.125.
Dear Anvita,
Most users of statistical tools never really think twice before making a statistical conclusion based on data. Your question is indeed a very nice one!
Suppose you have drawn a sample of observations, and you are interested in knowing whether your sample may have come from a normally distributed population with the numerical value of the location parameter equal to 6.0. In this case, you would use the t-test to make a conclusion.
Imagine a situation in which your sample mean is actually 6.0. Would you really need any statistical procedure to make a conclusion in this case? You can be 100% certain about the truth of your hypothesis because the difference between the sample mean and the numerical value of the location parameter is zero. This 100% (or 1.0) is the probability that there is no difference between the hypothesized value and the observed value. In other words, it is nothing but the probability that the sample mean is either more than 6.0 or less than 6.0.
Suppose your sample mean is 5.8. In this case, the difference (= 0.2) may or may not be statistically significant, depending on the size of your sample. Even if your sample mean were 6.2, the difference would be the same.
Observe that in this case the level of truth of your hypothesis is not equal to 1.0; it will be less than that. It is indeed the probability that the value of the sample mean is either more than 6.2 or less than 5.8.
In this way, when the difference is so large that you are not even 5% sure about the truth of your hypothesis, you would reject the hypothesis at that probability level of significance. In fact, 5% is a very small value. If you are not even 5% sure that a statement is true, then it is better to reject it. That is the point.
Now, coming back to your question, the two values defining the interval around 6.0, beyond which you are not even 5% sure about your hypothesis, are known as the 95% confidence interval.
Why 95%? Well, you can certainly make it 99%. But in that case, you are compromising by reducing the error level you allow to just 1%.
@Arturo, thank you, I see your point. And I found a mistake in my post.
I used R and binom.test() to calculate the CI. This function employs the beta distribution to get the limits, but gives the Clopper-Pearson interval. There are in fact better ways.
But I can't figure out how you calculated the interval and the point estimate. According to http://arxiv.org/pdf/1012.0566v3.pdf for example, I get a CI from 0.005 to 0.521, with the maximum likelihood value still at p=0, now clearly outside the interval. Using Be(2,2) as prior (instead of Be(1,1), as I suppose the authors are using) the interval is 0.034...0.579 with the maximum likelihood at 0.167.
@Jochen I used a Beta(1/2,1/2) as (Jeffreys) non-informative prior, so the posterior is Beta(1/2,7/2), and therefore the minimum-length 95% probability interval is ]0, 0.444[ with point estimator p = 0.125 ;-)
Thanks, now I see :) I was looking for the mode of the posterior, whereas you gave the mean (expected value). Thank you again for the clarification. Interestingly, the mode obtained from "your" posterior by (a* − 1)/(a* + b* − 2) is (0.5 − 1)/(0.5 + 3.5 − 2) = −0.25, which is negative, which I find strange.
@Anvita, do the experiment many, many times, practically N_total → infinity, and every time compute the 95% CI for the parameter you want. After finishing all the experiments, count how many times (N_yes) the true value is inside your computed 95% CIs. Finally compute the ratio N_yes/N_total and you will find that it tends to 0.95 as N_total → infinity. That's all.
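This is easy to simulate. A sketch in R, assuming a normal "experiment" with a known true mean:

set.seed(42)
true_mu <- 10
covered <- replicate(10000, {
  x <- rnorm(20, mean = true_mu, sd = 3)  # one simulated experiment
  ci <- t.test(x, conf.level = 0.95)$conf.int
  ci[1] <= true_mu && true_mu <= ci[2]    # does this CI cover the true value?
})
mean(covered)  # the proportion of covering intervals, close to 0.95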
@Jochen
The formula you attempted to use to calculate the mode of the posterior is only valid when both parameters are greater than 1, which is clearly not the case here, since a* = 1/2.
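For reference, Arturo's numbers can be reproduced in R from the posterior Beta(1/2, 7/2):

a <- 0.5 + 0  # prior Beta(1/2, 1/2) plus 0 successes
b <- 0.5 + 3  # ... plus 3 failures
a / (a + b)        # posterior mean: 0.125
qbeta(0.95, a, b)  # upper end of the interval starting at 0: about 0.444
# Since a < 1 the posterior density decreases monotonically from 0, so the
# minimum-length 95% interval indeed starts at 0, and the mode formula fails.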
To keep it seemingly simple, though: the CI denotes how confident you are in your estimation/guesstimation of your inferred value.
The 95% CI is what we generally use in medicine / clinical biostatistics hypothesis testing, since in most of our studies a P-value of 0.05 is considered significant enough to reject our initial null hypothesis.
You can, however, use a 99% or 70% CI depending on how you wish to report your data at hand.
► http://bit.ly/17pBSYV
hth.
A confidence interval is a method of estimation. It is associated with a confidence level in percent (like 95%). It gives a range of possible values for the population value to be estimated.
Here is an interpretation of a 95% confidence interval:
if we repeatedly take a sample of the same size and obtain each time an estimate and the corresponding 95% confidence interval, we EXPECT 95% of these many intervals to contain the true population value. So 5% would not contain the true population value. We don't know beforehand which group (contains OR does not contain) the confidence interval we get belongs to. We are ONLY 95% confident that the population value is contained in our interval (only one of many).
Looking at the lower limit and the upper limit of the interval can also help in making a decision about the true population value.
I was told in a lecture that confidence intervals are one thing that Bayesian and Frequentist statisticians argue over a lot. Assume a 95% confidence interval.
The Frequentist view (as pointed out here by Jochen Wilhelm, Yolande Tra, and others) is that for 95% of the attempts to generate a confidence interval, the true value will lie within that range. In other words, the confidence intervals change, but the value stays the same.
The Bayesian view of a confidence interval is that in 95% of the attempts to calculate a value, it will lie within the confidence interval range. In other words, the confidence interval stays the same, but the value changes.
@David: I can accept your description of the frequentist's view, but in my opinion your description of the Bayesian view is a little distorted. Let me try to put it in different words:
The frequentist sees a true, real-existing, constant, but unknown parameter. He assigns NO probability to this parameter; it is just unknown. He will never get any direct hint from any data about the value of this parameter. However, he can construct CIs so that a given percentage of such intervals will include this unknown value in the long run.
The Bayesian sees a model with an unknown parameter value. He does assign a probability to the parameter values: there are more likely and less likely values. The probability distribution is adjusted in light of the available data. If in the absence of any data each theoretically possible parameter value is considered equally likely ("flat prior"), then the CI (obtained from the data) is the central range of the most likely values for this parameter.
Note that the CI changes for both frequentists and Bayesians. With replicate experiments about the same "question", the frequentists get several different CIs and just expect 95% of them to contain the "true", objectively existing value, whereas the Bayesians will update their "best guess" (sometimes called "belief") with each new set of data (this process is called "learning"). NB: if the frequentist uses all this data together, he will get a single CI that again has the same limits as the final "posterior interval" obtained by the Bayesian (-> the math behind all this is the same, only the philosophy is different; the math differs only if the Bayesian uses some "special", non-flat prior to include relevant prior knowledge. So to speak, the frequentist's math is just a special case of - or better, a part of - the Bayesian's math).
@Fernando: If you have a histogram of parameter values, e.g. produced by bootstrapping, you get the limits of the CI.95 simply as the limits of the central range containing 95% of the values. So the lower limit is the 2.5th percentile and the upper limit is the 97.5th percentile of the data shown in your histogram. You can further think of a good measure of the central tendency of your data. This is simple when you have a symmetric distribution, since mean, median and mode are the same. In skewed distributions, the mean might be relatively "atypical" for most of the data. I personally would favor the mode if I have to present a "typical" or "central" value.
A confidence interval can be described as a range of values, calculated from the sample observations, that is believed, with a particular probability, to contain the true parameter value. A 95% confidence interval, for example, implies that were the estimation process repeated again and again, then 95% of the calculated intervals would be expected to contain the true parameter value.
It should be noted that the stated probability level in this case refers to properties of the interval and not to the parameter itself, which is not considered a random variable.
Azubike, this part is wrong and misleading: "that is believed, with a particular probability, to contain the true parameter value". This is the crux of the frequentist view: a single interval does not tell us anything. It is just the basis for a decision that might be right or wrong. Nothing can be said about the probability with which a *particular* decision is right or wrong! Only in the long run will a maximum rate of wrong decisions not be exceeded, when the decision rule is kept.
Jochen, this is not only a "crux of the frequentist view", but of decisions in general. As far as I know, it is always true that "It is just the basis for a decision that might be right or wrong. Nothing can be said about the probability with which a *particular* decision is right or wrong!". No certainty can ever be claimed: this is the crux of the Bayesian view, with the choice of the prior and the posterior probabilities being a guess according to a hypothesis and a subjective rule. Isn't it?
Franco, I never claimed that certainty is available with noisy data. I wanted to point out that the frequentist has to stop at the point where he can make a decision with defined long-run error rates, but without being able to claim anything about the chance that a particular decision is right or wrong. The Bayesian, in contrast, can derive a posterior density, which directly represents his knowledge or conviction about the topic. In the first place it is unimportant how the prior was defined - whether it is just a very subjective point of view, or based on some prior knowledge, or "uninformative" in some way. It represents a conviction, and so does the posterior. If it is measurable at all, decisions based on posterior densities should have good frequentistic properties - sure. If this were not the case, it would indicate that the prior was not chosen reasonably. But all this is *not* the central point of my argument. However good or bad the prior is chosen: the Bayesian can make up his mind on the question of what a "reasonable" range for an estimate is, whereas the frequentist can't do this at all (although many do think that they could...).
An interesting historical note about the origin of this ubiquitous 0.95 value:
http://www.jerrydallal.com/LHSP/p05.htm
(spoiler: that's all Sir Ronald's fault... as always!)
This implies a measure that specifies the range of values within which a given percentage of the sample means falls. For instance, the conventional 95% value in most of our statistical results indicates that we are 95% sure of our findings (variable and construct interactions) on a particular issue, leaving a 5% chance that we might be wrong.
@Adeboye: This is wrong. Please read the posts above, especially my answer to Azubike. Or read the definition of confidence intervals (e.g. in http://en.wikipedia.org/wiki/Confidence_interval): "A confidence interval does *not* predict that the true value of the parameter has a particular probability of being in the confidence interval given the data actually obtained [...]"
Jochen! If you think that is the case, could you please proceed to provide answer to the second phase of the question. I think with this you will be able to provide a clear picture of how confidence intervals could be interpreted.
Adeboye, do you mean the part "What is the practical significance of this value?" ?
Short answer: (1−α)-CIs quickly show you which hypotheses could be rejected while keeping the maximum type-I error rate at α·100%. Not less than this, and not more either.
See this EXCEL sheet. It may help understanding:
https://www.researchgate.net/publication/258120613_Estimation_of_Population_Variance_(CI)?ev=prf_pub
Jochen is correct. A (not "the") confidence interval is one interval from a family of intervals of which 95% will contain the population parameter. If you compute one CI, it either includes the true population parameter (then the confidence is 100%) or it does not (then the confidence is 0%). It depends on several things, one being how representative your sample is of its population.
The practical significance is given by the error of estimation (1/2 (UL − LL)), also called the margin of error...
If you want to do a hypothesis test for a given parameter value, just look at the CI and see if the parameter belongs to it. If it does, then you should not reject H0: parameter = k; if it does not, then it is safe to reject the null hypothesis at an α × 100% type I error rate.
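This duality is easy to check in R with a one-sample t-test on made-up data:

set.seed(7)
x <- rnorm(15, mean = 1)
t.test(x, mu = 0)$p.value  # test of H0: mu = 0
t.test(x)$conf.int         # H0 is rejected at the 5% level iff 0 lies outside this CI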
Thank you for clarifying some aspects, Fausto.
1) I did not mean to say that the confidence level is selected upon the t-quantile. It was really meant the other way around: you have a desired level of confidence, and for this you have to select the appropriate quantile of the t-distribution.
1 MOREOVER) The frequentistic properties are only assured for normally distributed data/errors. The (non-frequentistic) inferential implications are valid independent of the actual distribution. However, when I know that the data have some specific frequency distribution, I do have particular knowledge of the problem, which is then not correctly represented by using the normal error model.
2) I don't get your point here. H0 can be rejected solely based on p-values. By design, (1−α)-CIs are intervals of hypotheses that could not be rejected at α, so their additional value is to see which other hypotheses (besides H0) could be rejected, too. But this is rarely a practical issue; people don't do it. So only the estimate of a precision is a left-over possible practical implication. That's not misleading; that's simply how it is used. It is not a frequentistic property, and therefore you are right in stating that it is kind of fishy to (mis-)use the frequentistically defined CI to inferentially estimate a precision.
2 MOREOVER) You are a little harsh in claiming that this is misleading. Both measures are used as estimators of precision. Surely, CIs are much to be preferred, since the actual meaning of the SE depends on the sample size.
Btw, have you ever read how the normal distribution for *errors* is derived? I bet you think of it as the limiting case of the binomial for n → Inf. And I further assume that you think probability *is* a limiting frequency, and that (cumulative relative) frequency distributions are natural estimates of probability distributions. If so, I'd consider that misleading.
Confidence interval - a range of values of a sample statistic that is likely (at a given level of probability, called a confidence level) to contain a population parameter. The interval that will include the population parameter a certain percentage (confidence level) of the time in the long run (over repeated sampling). In other words, a range of values with a known probability of including or capturing the true population value.
It is common to say that one can be 95% confident that the confidence interval contains the true value. Rather, it is correct to say: were one to take an infinite number of samples of the same size, on average 95% of them would produce confidence intervals containing the true population values.
Cimpoies, there is an often-made misinterpretation in your answer: "In other words, a range of values with a known probability of including or capturing the true population value" - that is not correct, since only the data are considered. In a particular case, the definition of a CI does *not* allow one to assign a probability to a correct or wrong decision. This is the consequence of the strange frequentistic definition. The fact that 95% of such intervals will include the pop. mean does not allow one to say that one particular CI has a 95% probability of including this particular pop. mean.
Fausto, sure, the frequentistic properties will only be met mathematically exactly if and only if the (limiting) relative frequency distribution is identical to the probability distribution assumed for modelling the errors. In this respect, not a single *real* (i.e. empirical) CI or p-value is correct. They are all wrong. More or less. The (empirically, practically) sensible question is actually not what is right or wrong (we know the answer: they are all wrong), but rather: how wrong is still tolerable? If you *know* already that the assumed error model is wrong, there are two choices: (1) you have a better one - then use that one; (2) you don't have any better one - then take the one which might fit best and which has the highest entropy, and interpret with care.
Again, again, and again: I did not state that a CI is an interval for MU. It is a sample interval (around xbar). It has nothing to do with MU, except that, by design, a defined proportion of such intervals will include MU in the long run. I don't understand why you always have to nag about this.
A CI can become a statement about MU if some expectation about MU is considered. From a Bayesian point of view, a CI is similar to a credible interval obtained from a flat prior.
By the way, btw is an abbreviation of "by the way".
And, btw, I find it a little strange that you address critique of my posts not to me (to correct me, with arguments, so that I might understand my misconceptions) but (seemingly only) to other readers. I don't find that very kind.
Finally, I hardly understand your posts. I probably just misunderstand you.
PS: Interestingly, "scientific behaviour" is actually and practically just *not* based on Neymanian interpretations of significances and confidence intervals.
[---Edit---]
PS: Have you read that I wrote: "This depends on the parameter and the error model." and "A frequent problem is to give the CI for a mean value (xbar)." ?
But after re-reading my own answer, I found something I would correct, or write slightly differently now:
"the Bayesian interpretation is that this particular interval includes the true value with a given probability." should actually read "the Bayesian interpretation is that this particular interval is an interval of the most credible parameter estimates." (because seeking a "true" value is not the aim here; instead, possible estimates are rated with regard to their credibility, given the data).
Unfortunately this is still not easy to understand, especially when the concepts of probability, likelihood and credibility are not very clear.
It is an interesting debate about a key point. Perhaps it may be explained better with an empirical example that gives the numerical average values of some partition, like the deciles table of a Lorenz curve data set. Then Jochen and Fausto could give their lower limit and upper limit around each partition or quantile average, plus their assumed model and prior premises. My impression is that statistical theory is in a crisis of theoretical fundamentals due to the high influence of normal distributions in the discipline, so non-parametric methods are sane reactions to fix it. All this creates confusion among students, who declare that they do not understand the subject very well and feel forced to memorize it if they want to get good results as students. It is a two-century-old crisis that requires assuming clear and coherent positions, and people who think with their own minds, not just referencing standard textbooks and the central limit theorem. I sympathize more with Fausto's position, even though I did not understand all his arguments.
The confidence intervals estimated by classical statistical means often depend on the assumption of a certain distribution for the parameter. This is sometimes a problem and can be bypassed by resampling techniques, like bootstrap methods. The advantage of these methods lies in their flexibility, and - given the computing capacities now available - complex models can be handled in quite a straightforward way. We did so in the inversion of seismic velocity models. One may give a look to textbooks like Hastie, Tibshirani & Friedman, "Elements of Statistical Learning"; besides, plenty of material concerning resampling techniques is found on the internet.
> As I promised, I uploaded today December 20th 2013 the
> document "Confidence Intervals versus Credibility Intervals:
> Classic vs Bayesian Statistics, first part"
Sorry Fausto. I tried to read that document, but there is so much vitriol and fluff, and there are so many unnecessary font changes, that I have trouble seeing the point of the document.
If you want myself (or other people) to read and understand your discussion on confidence intervals, please keep to a consistent font, be as concise as possible on the points you wish to discuss, and reduce the attacks on others to a nonexistent minimum. You need to treat your readers as intelligent, competent people who want to improve their knowledge of the subject under discussion.
Unfortunately, you spent the first 8 or so pages talking about how incompetent everyone else is and splitting off on many different seemingly-irrelevant tangents on the way. Here's one of the more well-written segments of that introductory section:
--
Once upon a time A. Einstein said "Surely there are two things infinite in the world: the Universe and the Stupidity of people. But I have some doubt that Universe is infinite". Let's hope that Einstein was wrong, this time. Anyway, before him, Galileo Galilei had said [in the Saggiatore] something similar "Infinite is the mob of fools ".
--
Here is a less well-written segment that is repeated a few times once the discussion on confidence intervals begins. There are more text font/size changes in this segment than what can be represented in a ResearchGate Comment (I count 11 font changes, including lowercase / allcaps switching):
--
When the SAMPLE is INCOMPLETE the previous formulae are NO LONGER VALID That’s why I gave to my students that exercise of the THREE SUPER_incompetent professors, highly rated in the so called !!!!!!
--
When someone makes almost continuous attacks on others, doesn't use proper English grammar in English communications, and goes completely off topic, I have difficulty believing in the credibility of what they say.
I have always been following this thread with interest, as a practitioner in a field - metrology - where the evaluation of uncertainty is the basic issue, because "confidence" or "belief" is the natural consequence of statistical thinking. So natural that it is often forgotten that an 'uncertain number' is NOT composed of two elements - value and uncertainty - but of three, since the latter requires a decision: the confidence or belief 'level' or 'degree'.
However, I am a little bored by the continuing dispute between frequentists and Bayesians in probabilistic statistics. I think that both methods have pros and cons, but neither can be taken as the 'terminal' solution for everything.
This exclusive and assertive attitude extends also, in my opinion, to the belief of too many statisticians that one can get, from the use of any statistical means, MORE than a mere 'confidence' or 'belief' - a kind of certainty. I understand so well that conclusions derive from assumptions that I have had, on many occasions in my professional life, to verify that a method was instead used beyond its assumptions: obviously it can be done, but the results lose their 'optimal' status - 'optimal' being another fuzzy term if one does not associate it with the contingent meaning.
I perfectly understand that statistical (not necessarily probabilistic) reasoning is the only viable method in science, outside mathematics and including 'soft' sciences like economics, and that its outcomes are needed for taking decisions with a quantified 'risk'. However, especially to students and to less educated people, the difference between a probability and the 'truth' should frequently be recalled in all-capital letters.
I think that the original meaning and intention of Anvita's initial conscious question has somewhat been lost in the course of this panel.
In this respect, the issue is the correct use of any method according to its assumptions, being clear that no method is good for 'all seasons' and no method provides THE solution, the decoration 'optimal' or 'best' being only relative to the assumptions, information available and to the limits of the inference.
In a scientific panel I would appreciate a larger degree of 'relativism'.
Statisticians use a confidence interval to express the degree of uncertainty associated with a sample statistic. A confidence interval is an interval estimate combined with a probability statement.
Confidence intervals are preferred to point estimates and to interval estimates, because only confidence intervals indicate
(a) the precision of the estimate and
(b) the uncertainty of the estimate.
The statistician might use a confidence level to describe the uncertainty associated with the interval estimate, describing the interval estimate as a "95% confidence interval". This means that if one used the same sampling method to select different samples and computed an interval estimate for each sample, one would expect the true population parameter to fall within the interval estimates 95% of the time.
When you do a survey you can only do it ONCE, coming up with a single estimate.
If you repeated the survey, different estimates would result. However, what is really required is the TRUE population value, not an estimate.
Using sampling theory / the central limit theorem, the CI can be computed.
The confidence interval expresses the likelihood that the TRUE value is within the confidence interval. The 95% level is a standard percentage expressing the chance that the true or POPULATION value lies within this range.
There is a simple definition of confidence intervals (CIs): "The confidence interval is the range from the lowest to highest population estimates of the parameter which are NOT significantly different from the observed value." A second way of stating this is "The confidence intervals are the range of population values that are congruent with the observed value. 'Congruent' is defined as population values from which the observed value is not significantly different."
With this definition, it is easy to program the confidence intervals for any parameter being tested. Just program a loop that lowers the population parameter until the lowest non-significant value is found for the lower CI limit, and then repeat the loop, raising the population value until the highest non-significant value is found. This is a universal method. Elaborate formulas for each parameter are not needed. Any parameter with a significance test can have its confidence intervals estimated.
(I am drafting a publication to this point.)
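As a sketch of this inversion idea for a one-sample mean in R: instead of a stepwise loop, uniroot() can search for the hypothesized values whose two-sided t-test p-value equals alpha (the helper name ci_by_inversion is mine, not Richard's):

ci_by_inversion <- function(x, alpha = 0.05) {
  pfun <- function(mu) t.test(x, mu = mu)$p.value - alpha
  xbar <- mean(x)
  w <- 10 * sd(x)  # crude search window around the estimate
  c(uniroot(pfun, c(xbar - w, xbar))$root,  # lowest non-significant value
    uniroot(pfun, c(xbar, xbar + w))$root)  # highest non-significant value
}
x <- c(4.1, 5.3, 4.8, 5.9, 5.2, 4.6)  # made-up data
ci_by_inversion(x)                    # matches t.test(x)$conf.int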
@Fausto: Thanks for acknowledging me in your document. Unfortunately we seem to be at a disagreement regarding what I was trying to say, and I note that you have not improved your writing style in the second document in order to get your point across in a more appropriate fashion. In short, I have trouble working out what the substance of your arguments are because I am distracted by the extra (and in my opinion unnecessary) embellishments that you make in your documents.
I'm not saying that you're wrong, just that I have trouble understanding the points you're trying to make. My understanding (and presumably that of others) would be greatly improved by a consistent argument written in a consistent font with minimal tangential discussions.
Fausto, I read your new paper and understood your main points. In general I agree with your demand to be clear and honest about premises and mathematical handling. You have shown us that there are many irregularities, tricks and falsities in confidence interval theories and practices. If this is what teachers transmit to students, then we are in serious methodological, educational and ethical trouble. This requires developing new ways to explain statistics without using a priori normal distributions, parameters and indicators as basic premises - they do not let the data speak for themselves. Thanks for your hard work on these key points, and I wish you recognition from the science "authorities", even if it arrives late. Your style may be quite different - you have the right to it - and if it opens our minds, it is plainly justified to be somewhat harsh but honest and well intended. emilio
The answer is available on page 2 of the attached publication.
95% is arbitrary, but it is the level used by most statisticians.
So you can want to be 80% confident, or 90% confident, or 99% confident of your result, and that it was not obtained by chance.
In the first instance, one needs to understand the concept of the standard error (SE), which is different from the standard deviation (SD). The SD describes the variability in a sample of observations; the SE describes sampling variability, i.e. how much uncertainty is conferred upon a sample mean (xbar) computed from a sample of observations. Then xbar ± 1.96 SD is the 95% range of observations in a sample, and xbar ± 1.96 SE is the 95% confidence interval. The former is a 95% range of observations; the latter is about the confidence that the population mean falls in that range. Basically, sampling is done to estimate the population mean, either as a single point or as an interval of confidence with a known threshold of 95% or 99%. The threshold leaves only 5% to chance, i.e., a 1 in 20 chance of not being confident.
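The distinction is quickly illustrated in R (made-up data):

set.seed(5)
x <- rnorm(100, mean = 50, sd = 10)  # hypothetical sample
m <- mean(x); s <- sd(x)
se <- s / sqrt(length(x))
c(m - 1.96 * s, m + 1.96 * s)    # ~95% range of individual observations (uses SD)
c(m - 1.96 * se, m + 1.96 * se)  # ~95% confidence interval for the mean (uses SE)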
With respect to linear regression, the standard error of the estimate (STEYX) has properties analogous to the standard deviation. You might say the standard error of the estimate is to the best-fit line what the standard deviation is to the mean of the frequency distribution. So if your best-fit line to estimate Y from X is:
Y=a+bX,
then the parallel lines Y = a + bX + STEYX and Y = a + bX − STEYX form the boundaries within which 68% of the data should appear. Likewise, using ±2 times STEYX with the best-fit equation gives the boundaries for 95% confidence, and so on. As you see from this simple example, the confidence intervals on Y depend on the magnitude of X. Additional study is required to know when linear regression should not be applied and to learn when other options are recommended.
I'm somewhat confused about the "parallel lines" delineating 68% or 95% intervals. As far as I remember, in linear regression the confidence regions are pinched around the data centroid and widen towards the extremes. This, by the way, is one of the reasons why we should be extremely cautious with extrapolation. Anyway, is there something I misunderstood in Kenneth's contribution?
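Wolfgang's recollection matches what predict() gives in R. This sketch (made-up data) overlays the pinched confidence band for the fitted line and the crude parallel ±2·STEYX lines; note that the parallel band describes where individual observations fall, not the uncertainty of the line itself:

set.seed(3)
x <- 1:30
y <- 2 + 0.5 * x + rnorm(30, sd = 2)  # hypothetical data
fit <- lm(y ~ x)
band <- predict(fit, interval = "confidence", level = 0.95)
plot(x, y)
abline(fit)
lines(x, band[, "lwr"], lty = 2)  # confidence band: narrowest at mean(x)
lines(x, band[, "upr"], lty = 2)
steyx <- summary(fit)$sigma       # residual standard error (Excel's STEYX)
abline(coef(fit)[1] + 2 * steyx, coef(fit)[2], lty = 3)  # parallel approximation
abline(coef(fit)[1] - 2 * steyx, coef(fit)[2], lty = 3)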
95% of sample means lie within 1.96 standard errors of the population mean. Any single sample mean consequently has a 95% chance of being within 1.96 standard errors of the true population mean. The standard error of a sample is its standard deviation divided by the square root of the sample size, so the following should be clear:
a) Larger samples lead to lower standard errors;
b) Smaller samples lead to higher standard errors;
c) Larger standard deviations lead to larger standard errors;
d) Smaller standard deviations lead to lower standard errors
And, of course, lower standard errors lead to tighter confidence intervals. If you want to use a confidence interval that is not 95%, here are the distances away from the mean for some other percentages of a normal distribution (reproduced with an R snippet after the table). The distances are in units of standard errors (measured from the sample).
Confidence level Standard Errors from mean
99% 2.58
95% 1.96
90% 1.64
80% 1.28
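In R, these multipliers come straight from the standard normal quantile function:

conf <- c(0.99, 0.95, 0.90, 0.80)
round(qnorm(1 - (1 - conf) / 2), 2)  # 2.58 1.96 1.64 1.28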
Regression Slope: Confidence Interval
This lesson describes how to construct a confidence interval around the slope of a regression line. We focus on the equation for simple linear regression, which is:
ŷ = b0 + b1x
where b0 is a constant, b1 is the slope (also called the regression coefficient), x is the value of the independent variable, and ŷ is the predicted value of the dependent variable.
Estimation Requirements
The approach described in this lesson is valid whenever the standard requirements for simple linear regression are met.
The dependent variable Y has a linear relationship to the independent variable X.
For each value of X, the probability distribution of Y has the same standard deviation σ.
For any given value of X,
• The Y values are independent.
• The Y values are roughly normally distributed (i.e., symmetric and unimodal). A little skewness is ok if the sample size is large.
Previously, we described how to verify that regression requirements are met.
The Variability of the Slope Estimate
To construct a confidence interval for the slope of the regression line, we need to know the standard error of the sampling distribution of the slope. Many statistical software packages and some graphing calculators provide the standard error of the slope as a regression analysis output. The table below shows hypothetical output for the following regression equation: y = 76 + 35x.
Predictor Coef SE Coef T P
Constant 76 30 2.53 0.01
X 35 20 1.75 0.04
In the output above, the standard error of the slope (shaded in gray) is equal to 20. In this example, the standard error is referred to as "SE Coef". However, other software packages might use a different label for the standard error. It might be "StDev", "SE", "Std Dev", or something else.
If you need to calculate the standard error of the slope (SE) by hand, use the following formula:
SE = sb1 = sqrt[ Σ(yi − ŷi)² / (n − 2) ] / sqrt[ Σ(xi − x̄)² ]
where yi is the value of the dependent variable for observation i, ŷi is the estimated value of the dependent variable for observation i, xi is the observed value of the independent variable for observation i, x̄ is the mean of the independent variable, and n is the number of observations.
How to Find the Confidence Interval for the Slope of a Regression Line
Previously, we described how to construct confidence intervals. The confidence interval for the slope uses the same general approach. Note, however, that the critical value is based on a t score with n - 2 degrees of freedom.
Identify a sample statistic. The sample statistic is the regression slope b1 calculated from sample data. In the table above, the regression slope is 35.
Select a confidence level. The confidence level describes the uncertainty of a sampling method. Often, researchers choose 90%, 95%, or 99% confidence levels; but any percentage can be used.
Find the margin of error. Previously, we showed how to compute the margin of error, based on the critical value and standard error. When calculating the margin of error for a regression slope, use a t score for the critical value, with degrees of freedom (DF) equal to n - 2.
Specify the confidence interval. The range of the confidence interval is defined by the sample statistic ± margin of error, and the uncertainty is denoted by the confidence level.
In the next section, we work through a problem that shows how to use this approach to construct a confidence interval for the slope of a regression line.
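In R, lm() and confint() carry out all of these steps at once; the data below are made up to echo the hypothetical equation y = 76 + 35x:

set.seed(11)
x <- runif(40, 0, 10)
y <- 76 + 35 * x + rnorm(40, sd = 60)  # hypothetical data
fit <- lm(y ~ x)
summary(fit)$coefficients        # slope estimate and its standard error
confint(fit, "x", level = 0.95)  # slope +/- t(n-2) quantile * SE, done internally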
If you want a confidence interval related to a given prediction of Y then use the methodology shown at
http://people.stfx.ca/bliengme/ExcelTips/RegressionAnalysisConfidence2.htm
This gives the type of confidence intervals that you describe.
However, what I described is from Schaum's Outline of Statistics, p. 243.
These confidence intervals are relevant to the sample population used to derive the best fit line.
@ Richard Gorsuch
Richard's simple definition of confidence intervals (CIs), "The confidence interval is the range from the lowest to highest population estimates of the parameter which are NOT significantly different from the observed value", looks incorrect to me, since it is missing a basic 'detail'. In fact, I think a more correct wording is:
"The confidence interval represents values for the population parameter for which the difference between the parameter and the observed estimate is not statistically significant at the 10% level " from Cox D.R., Hinkley D.V. (1974) Theoretical Statistics, Chapman & Hall, p214, 225, 233 (cited in Wikipedia).
Richard's wording has an incorrect flavour of 'certainty'. I insist that such an approach has been demonstrated to be very dangerous, because it instigates too many people to confuse confidence with certainty.
What he is reporting here looks like what the Vocabulary of Metrology (VIM, http://www.bipm.org/en/publications/guides/vim.html) calls "metrological compatibility" between data. I report here the definition (clause 2.47): "property of a set of measurement results for a specified measurand, such that the absolute value of the difference of any pair of measured quantity values from two different measurement results is smaller than some chosen multiple of the standard measurement uncertainty of that difference".
Notice the "CHOSEN": there is no objective method or justification for the choice; it can only be the result of a DECISION (individual or shared).
Notice also that a 'simple' incorrect use of terms may lead to real mistakes: accuracy in wording may be difficult, especially when not using one's native language (not the case for Richard), but it is essential in science, namely in statistics.
I understand that statistics is useful because it helps in taking decisions, but nobody should forget that nobody can provide the truth - nor the true value.
@Fausto Galetto & @David Eccles
Notice that I have used capital letters here too: this is simply because no other type of emphasis is available here apart from quotes.
I may understand that an excessive use of emphasis can be irritating, but in no way do I understand, @David, how this can prevent understanding. In all instances, if one is disturbed, one simple way I may suggest is to copy the text, e.g., into MS Word or Pages, and transform all of it into lower-case letters or any other editorial form one prefers!
However, I agree, in principle, that scientific or technical people should be able to correctly appreciate the concepts even without added emphasis. The problem I have found, being old enough, is that I too have experienced that this appreciation quite frequently does not happen, and not only in the classroom.
The 95% CI estimates the statistical accuracy of a result.
You will find a comprehensive review in the attached file.
This short note from sanofi-aventis has several flaws and misconceptions, starting right on page one.
The very first statement "A confidence interval [...] shows the range within which the true treatment effect is likely to lie" is already wrong. The correct statement (and this in fact has a different meaning!) would be: An (1-alpha)-confidence interval is a range of hypotheses that could not be rejected at a significance level of alpha.
Confidence intervals make no indication of any "truth"!
Second sentence: "A p-value is calculated to assess whether trial results are likely to have occurred simply through chance". Apart from the fact that this is again wrong and misleading: what, please, does "to occur by chance" mean? This is a circular reference! "Chance" is quantified by p-values. You cannot explain one by the other. Hence this sentence does not explain anything. It rather is a misguiding pseudo-explanation! The correct sentence would be: "A p-value is the probability to get a test statistic at least as extreme as the one calculated from the observed data under the null hypothesis". There is no way to simplify this sentence without getting it wrong. If one doesn't understand what this means, he should rethink whether it is a good idea to work in empirical sciences. The only simpler statement that can be derived from this is: if all decisions about rejecting null hypotheses are made based on the rule p ≤ α, then the long-run rate of falsely rejected null hypotheses will not exceed α.