This is still my old problem...

Obviously, the likelihood and the "sampling distribution" are related. The shape of the likelihood converges to the shape of a normal distribution with mean xbar and variance s²/n (the only difference is a scaling factor that makes the integral equal unity).

Consider normally distributed data, with the variance s² being a nuisance parameter. If the uncertainty of s² is ignored, the likelihood is again similar to the normal sampling distribution. The confidence interval can then be obtained from the likelihood as the central range covering 100(1-a)% of the area under the (normalized) likelihood curve. This is explained, for example, in the linked document below (p. 31, last paragraph).
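To check that I understand this correctly, here is a minimal numeric sketch (Python with numpy/scipy; the data, seed, and sample size are made up just for illustration): sigma is fixed at s, the likelihood of mu is normalized to unit area, and the central 95% range is read off. It should reproduce xbar ± 1.96·se.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=20)  # hypothetical data
n, xbar, s = len(x), x.mean(), x.std(ddof=1)
se = s / np.sqrt(n)

# Likelihood of mu with sigma fixed at s (uncertainty of s² ignored)
mu = np.linspace(xbar - 6 * se, xbar + 6 * se, 2001)
loglik = np.array([stats.norm.logpdf(x, m, s).sum() for m in mu])
lik = np.exp(loglik - loglik.max())   # rescale for numerical stability
dmu = mu[1] - mu[0]
lik /= lik.sum() * dmu                # scale so the area under the curve is 1

# Central 95% range under the normalized likelihood curve
cdf = np.cumsum(lik) * dmu
lo, hi = np.interp([0.025, 0.975], cdf, mu)
print(lo, hi)                               # likelihood-based interval
print(xbar - 1.96 * se, xbar + 1.96 * se)   # matches xbar -/+ 1.96*se
```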

The author shows that the limits of the interval are finally given by sqrt(chi²(a))*se (equation 13) and states that sqrt(chi²(a)) is the same as the corresponding quantile of the normal distribution (the 1-a/2 quantile, since the chi² here has one degree of freedom). Then he writes: "The test uses the quantile of a normal distribution, rather than a Student t distribution, because we have assumed the variance is known." Ok so far.
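This identity is easy to verify numerically (a small Python/scipy sketch; a = 0.05 is chosen just as an example):

```python
from scipy import stats

a = 0.05
print(stats.chi2.ppf(1 - a, df=1) ** 0.5)  # sqrt of chi² quantile: 1.95996...
print(stats.norm.ppf(1 - a / 2))           # standard normal quantile: 1.95996...
```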

How is this done in the case where the variance is unknown? I suppose one would somehow arrive at sqrt(F(a))*se, so that sqrt(F(a)) is the corresponding quantile of a t-distribution...
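A quick numeric check (again Python/scipy, with an arbitrary n = 20) suggests this supposition is right: the 1-a quantile of F(1, n-1) is the square of the 1-a/2 quantile of t with n-1 degrees of freedom.

```python
from scipy import stats

a, n = 0.05, 20
print(stats.f.ppf(1 - a, dfn=1, dfd=n - 1) ** 0.5)  # sqrt of F quantile: 2.0930...
print(stats.t.ppf(1 - a / 2, df=n - 1))             # t quantile: 2.0930...
```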

Other question: how is the likelihood (joint in mu and sigma²) related to the t-distribution? Is there a way to express this in terms of conditional and/or marginal likelihoods? This could possibly help me to understand the principle when there are nuisance parameters in general.
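One concrete relation that can be checked numerically: the profile likelihood of mu (with sigma² replaced by its conditional maximum-likelihood estimate sum((x-mu)²)/n for each fixed mu) is proportional to (1 + t²/(n-1))^(-n/2) with t = (xbar - mu)/se, which is exactly the kernel of a t-density with n-1 degrees of freedom. A sketch (Python/numpy, same made-up data as above):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=20)  # hypothetical data
n, xbar, s = len(x), x.mean(), x.std(ddof=1)
se = s / np.sqrt(n)

mu = np.linspace(xbar - 6 * se, xbar + 6 * se, 2001)

# Profile likelihood: for each mu, sigma² is set to its conditional
# MLE, sigma²(mu) = sum((x - mu)²)/n, giving L_p(mu) ∝ sigma²(mu)^(-n/2)
sig2_hat = np.array([np.mean((x - m) ** 2) for m in mu])
proflik = sig2_hat ** (-n / 2)
proflik /= proflik.max()

# t-density kernel with n-1 df in the variable t(mu) = (xbar - mu)/se
t = (xbar - mu) / se
tkernel = (1 + t ** 2 / (n - 1)) ** (-n / 2)

print(np.allclose(proflik, tkernel))  # True: the two curves are identical
```

So at least for the profile likelihood, the connection to the t-distribution appears to be exact, not only asymptotic.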

http://www.math.mcmaster.ca/~bolker/emdbook/chap6A.pdf
