I have the formula Y = (A-B)/m, where A and B are averages from samples of sizes nA and nB, and m is a "slope" determined by a linear regression on q points. Standard errors are given for A, B and m (sA, sB, sm). I can calculate the standard error of Y by error propagation as

sY = 1/|m| * sqrt( (sA)² + (sB)² + ((A-B)/m)² * (sm)² )
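
To make the propagation concrete, here is a minimal sketch with made-up example values (A, B, m, sA, sB, sm are all hypothetical), comparing the delta-method formula above against a quick Monte Carlo simulation under the stated normality assumption:

```python
# Minimal sketch (hypothetical example values): delta-method propagation of the
# standard errors of A, B and m into a standard error for Y = (A - B)/m,
# checked against a simple Monte Carlo simulation.
import numpy as np

A, B, m = 12.0, 9.0, 0.5          # hypothetical estimates
sA, sB, sm = 0.4, 0.3, 0.02       # hypothetical standard errors

# Delta method: Var(Y) ~ (sA^2 + sB^2)/m^2 + ((A - B)^2 / m^4) * sm^2
sY = np.sqrt(sA**2 + sB**2 + ((A - B) / m)**2 * sm**2) / abs(m)

# Monte Carlo check: draw A, B and m as independent normals and form Y
rng = np.random.default_rng(0)
n = 1_000_000
Ys = (rng.normal(A, sA, n) - rng.normal(B, sB, n)) / rng.normal(m, sm, n)
print(sY, Ys.std())   # the two values should be close
```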

Now I want to get a confidence interval for Y, so I need the degrees of freedom for the t-quantile. A rough guess would be nA+nB+q-3.

However, I doubt this, because if m were known exactly, sY would simply be sqrt( (sA)² + (sB)² )/|m| with nA+nB-2 d.f. But if m were "known" because q -> Infinity, then sm -> 0 and sY -> sqrt( (sA)² + (sB)² )/|m| as well, yet following the guess above it would come with infinitely many d.f. (d.f. = nA + nB + Infinity - 3). Both cannot be correct at the same time. A small numeric sketch of this tension is given below.
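
Here is a minimal sketch of that limiting behaviour (the numbers, nA, nB, and the assumed 1/sqrt(q) scaling of sm are all hypothetical): as q grows, sY converges to the "m known" value, while the naive d.f. count diverges.

```python
# Minimal sketch (hypothetical numbers): as q grows, sm shrinks roughly like
# 1/sqrt(q), so sY approaches the "m known exactly" value sqrt(sA^2+sB^2)/|m|,
# while the naive d.f. count nA + nB + q - 3 diverges.
import numpy as np

A, B, m = 12.0, 9.0, 0.5
sA, sB = 0.4, 0.3
nA = nB = 10

for q in (5, 50, 500, 5_000):
    sm = 0.05 / np.sqrt(q)          # assumed scaling of the slope's standard error
    sY = np.sqrt(sA**2 + sB**2 + ((A - B) / m)**2 * sm**2) / abs(m)
    naive_df = nA + nB + q - 3
    print(q, round(sY, 4), naive_df)

# Reference case: m known exactly
print("m known:", round(np.sqrt(sA**2 + sB**2) / abs(m), 4), "with", nA + nB - 2, "d.f.")
```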

So what is the correct way to get the d.f. and, hence, the confidence interval for Y?

(Please assume that the errors of A, B and m are all normally distributed, and please do not discuss alternatives to confidence intervals or their applicability and problems. It may well be that this is a stupid question because I have overlooked some simple fact or made a wrong derivation; this can easily be the case, and I would still be thankful for any help.)

Thanks!
