Just an example: consider the arithmetic mean of an iid sample of size n, assuming the observed variable has expectation µ and variance σ².
Then the standard error of the mean is √(σ²/n) = σ/√n; its asymptotic standard error is its standard error as n tends to infinity, hence 0. In that sense the arithmetic mean is a "good" estimator of the expectation: you can, in principle, get as close to µ as you want, if you can afford a high enough n.
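As a quick numerical sketch of that point (the function name here is my own), the σ/√n formula can be checked directly, and you can watch it shrink toward 0 as n grows:

```python
import math

def standard_error_of_mean(sigma, n):
    """Standard error of the arithmetic mean of n iid observations
    with standard deviation sigma: sqrt(sigma^2 / n)."""
    return math.sqrt(sigma ** 2 / n)

# The standard error shrinks like 1/sqrt(n), tending to 0 as n grows;
# that limit is what makes the *asymptotic* standard error of the mean 0.
sigma = 2.0
for n in (10, 1000, 100000):
    print(n, standard_error_of_mean(sigma, n))
```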
OK... does that mean the asymptotic standard error should always be 0? Actually I am fitting some data in gnuplot, and it reports an asymptotic error. Is the software assuming n to be very large in the background? How is it calculated? I mean, what are the basic steps to calculate it? I looked on Google but did not find a satisfactory answer. Thanks.
No, there is no reason for it to always be 0. See the gnuplot documentation for what it calls "asymptotic error". I guess it is related to the asymptotic normality of least-squares estimators and the relation between their covariance matrix and the Hessian of your fit function, but only the documentation will give the definitive answer.
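For what it's worth, here is a sketch (in Python with scipy rather than gnuplot, so only an analogy, and the model and data are invented) of where such "asymptotic" parameter errors typically come from in least squares: the covariance matrix of the estimates is approximated from the Jacobian of the fit function, and the square roots of its diagonal are the reported standard errors.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * x + b

# Synthetic data: a line with slope 2, intercept 1, plus Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = model(x, 2.0, 1.0) + rng.normal(0, 0.5, size=x.size)

# pcov is the approximate covariance matrix of the parameter estimates;
# the square roots of its diagonal play the role of gnuplot's
# "asymptotic standard error" on each fitted parameter.
popt, pcov = curve_fit(model, x, y)
asymptotic_se = np.sqrt(np.diag(pcov))
print(popt, asymptotic_se)
```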
Asymptotic standard error is an approximation to the standard error, based upon some mathematical simplification.
For example, we know from the Central Limit Theorem that the mean of n samples taken from independent identically distributed random numbers with finite variance converges in distribution to a normal distribution. The theorem doesn't guarantee that the means of finite samples are normally distributed, but we often calculate the standard error of the mean under the simplifying assumption that the means ARE normally distributed. Emmanuel's formula for the standard error is one such approximation.
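A small simulation of that point (a sketch; the distribution and sample sizes here are arbitrary choices of mine): sample means drawn from a decidedly non-normal distribution still have an empirical spread close to σ/√n.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100        # size of each sample
reps = 20000   # number of repeated samples
sigma = 1.0    # an Exp(1) variable has standard deviation 1

# Draw reps samples of size n from a skewed, non-normal distribution
# (exponential) and compute the mean of each sample.
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

empirical_se = means.std(ddof=1)     # observed spread of the sample means
theoretical_se = sigma / np.sqrt(n)  # sigma / sqrt(n) = 0.1
print(empirical_se, theoretical_se)  # the two should be close
```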
My perhaps oversimplified understanding of asymptotic statistics is that it concerns estimators that are only approximately correct and become closer to the 'true' value with larger samples. For example, in section 4 of the paper found at the attached link, I note, for a function I derived for some curves describing confidence intervals based on standard errors of prediction errors, that "...straight lines asymptotically approach these confidence bounds."
So sometimes the 'formula' one might be using is only approximately correct, and then only if the sample size is large.
I vaguely remember, many years ago, trying to deal with statistical testing for small samples when the available theory was asymptotic, meaning it would not be very accurate for small samples, and that was a problem. How inaccurate? I had to work at figuring out what was appropriate.
I think that there are likely different meanings for "asymptotics," depending upon context, like just about any other topic.
For your context, are you looking at a standard error estimator that is reportedly not just asymptotically 'correct' versus one that is only asymptotically 'correct'? In such a case, with a 'small' sample, you would need to use the former. You would only want to use the latter if you have a 'large' sample size.

So what is 'large', one might ask? That varies by application and by the particular estimator, so you are safer not using the asymptotic standard error when you have a standard error estimator that does not depend upon asymptotic theory. Why would you ever use an asymptotic estimator? You would if nothing else were available, or if it were too difficult to use anything more exact and you appear to have a large enough sample size. You might check that graphically, and you might also run a simulation to compare with a closed-form solution. In that sense, an asymptotic estimator is one you can use as an approximate closed-form solution when you would otherwise need to write a simulation.
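The "how inaccurate is asymptotic theory for small n" question above can itself be checked by simulation. A sketch (all names and numbers are my own, and the reps count is kept modest): compare the coverage of an asymptotic, normal-quantile confidence interval for the mean against the exact t-based interval when n is small.

```python
import numpy as np
from scipy import stats

def coverage(n, interval, reps=10000, mu=0.0, seed=1):
    """Fraction of simulated intervals that contain the true mean mu."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(mu, 1.0, size=n)
        lo, hi = interval(x)
        hits += (lo <= mu <= hi)
    return hits / reps

def z_interval(x):
    # Asymptotic interval: standard-normal quantile, fine for large n.
    se = x.std(ddof=1) / np.sqrt(x.size)
    z = stats.norm.ppf(0.975)
    return x.mean() - z * se, x.mean() + z * se

def t_interval(x):
    # Exact interval for normal data: Student t quantile, n - 1 df.
    se = x.std(ddof=1) / np.sqrt(x.size)
    t = stats.t.ppf(0.975, df=x.size - 1)
    return x.mean() - t * se, x.mean() + t * se

# With n = 5 the asymptotic interval covers noticeably less than 95%,
# while the t interval holds close to its nominal level.
print(coverage(5, z_interval), coverage(5, t_interval))
```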
Any statistics/estimators that only approach 'truth' with large n would be asymptotic statistics. I think that standard error estimators are only one such area.
Perhaps I misinterpreted your question, and I suppose there are other meanings for asymptotics, but this is my best guess at what you might want to know.
Cheers - Jim
Article: Properties of Weighted Least Squares Regression for Cutoff S...
@Scott: the standard error of the mean, computed as σ/√n, does not rely on the normal approximation or the central limit theorem; it only assumes that the observations are independent, identically distributed, and have an expectation and a variance.
Additionally, a symmetrical distribution avoids pitfalls in the intuitive interpretation of the standard error of the mean (and of the standard error in general, by the way).
The central limit theorem (normality) is "only" required for building confidence intervals or running tests on the mean.
This sounds like a purely statistical question. I have a result from a Goodman and Kruskal lambda analysis, and the SPSS-generated table includes:
1. Asymptotic standard error (not assuming the null hypothesis): always > 0 (e.g. 0.057)
2. Asymptotic standard error (assuming the null hypothesis): always > 0 (e.g. 3.245)
What is the difference between these two? Please help, so I can interpret my result on the strength of association of choice of fruit or chocolate in predicting obesity risk. I have already used the lambda values, but I feel the asymptotic standard errors have a meaning (which I don't know).
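Not an SPSS expert, but here is a hedged sketch of how the two ASEs are typically used, following the usual SPSS convention as I understand it (check the SPSS documentation; the lambda value and the second ASE below are hypothetical placeholders, not your numbers): the ASE not assuming the null goes into an approximate confidence interval for lambda, while the ASE assuming the null goes into the approximate test statistic for "lambda = 0".

```python
# Hypothetical placeholder values; substitute the ones from your own table.
lam = 0.25        # Goodman-Kruskal lambda (hypothetical)
ase_ci = 0.057    # ASE not assuming the null hypothesis
ase_null = 0.045  # ASE assuming the null hypothesis (hypothetical)

z = 1.959963984540054  # 97.5% standard-normal quantile, for a 95% interval

# ASE (not assuming the null): approximate confidence interval for lambda.
ci = (lam - z * ase_ci, lam + z * ase_ci)

# ASE (assuming the null): approximate statistic for testing lambda = 0,
# compared against a standard normal.
approx_t = lam / ase_null

print(ci, approx_t)
```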