I think it is the other way round. The central limit theorem assures an asymptotically normal distribution for the mean, and therefore justifies the t-test.
As a rule of thumb, n = 30 is thought to be large enough for this 'asymptotic' normality, except perhaps for extremely skewed distributions.
For moderate to large sample sizes and unequal variances, however, the Welch test, which does not rely on equal variances within groups, outperforms the classical t-test. Non-parametric tests (such as the Wilcoxon-Mann-Whitney test) do not rely on a specific distribution, but implicitly rely on equal variances: under H0, all samples come from the same population.
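As a quick sketch of the distinction (illustrative data only, assuming SciPy is available): the classical t-test pools the variances, Welch's version does not, and the Wilcoxon-Mann-Whitney test makes no distributional assumption at all.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two groups with unequal sizes and clearly unequal variances
a = rng.normal(loc=0.0, scale=1.0, size=25)
b = rng.normal(loc=0.5, scale=3.0, size=60)

# Classical Student t-test: assumes equal variances within groups
t_student = stats.ttest_ind(a, b, equal_var=True)
# Welch's t-test: drops the equal-variance assumption
t_welch = stats.ttest_ind(a, b, equal_var=False)
# Wilcoxon-Mann-Whitney: non-parametric, but under H0 it implicitly
# treats both samples as drawn from one common population
u_test = stats.mannwhitneyu(a, b)

print(t_student.pvalue, t_welch.pvalue, u_test.pvalue)
```

With heteroscedastic data like this, the Welch p-value is generally the more trustworthy of the two t-tests.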
Among others, Ruxton and Zimmerman have both investigated the performance of parametric and non-parametric tests under violations of different assumptions.
Zimmerman, Donald W. "Comparative power of Student t test and Mann-Whitney U test for unequal sample sizes and variances." The Journal of Experimental Education (1987): 171-174.
Ruxton, Graeme D. "The unequal variance t-test is an underused alternative to Student's t-test and the Mann-Whitney U test." Behavioral Ecology 17.4 (2006): 688-690.
Assuming that all other factors have been considered, it is rational to use the t-test even for large samples, because the population variance is unknown in almost all practical cases. When we substitute the sample variance for the population variance, we are dealing with the t distribution. This is right philosophically. Numerically, when n > 30 the t statistic and the z statistic are close, or the t distribution is close to the normal distribution, and, more importantly, the influence of the degrees of freedom on the shape of the distribution gradually vanishes. That is why no statistical package sets up a rule to choose between the t-test and the z-test; they simply report t-test results regardless of sample size. "Student" proposed the t-test to overcome the inadequacy of the z-test for small samples. The t-test, on the other hand, gains power as the sample size grows, so there is no deficiency to overcome for large samples.
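The numerical convergence described above is easy to check (a minimal sketch, assuming SciPy): the two-sided 5% critical value of the t distribution approaches the normal value of about 1.96 as the degrees of freedom grow.

```python
from scipy import stats

# Two-sided 5% critical value under the normal distribution
z_crit = stats.norm.ppf(0.975)

# Corresponding t critical values for increasing degrees of freedom
for df in (5, 30, 1000):
    t_crit = stats.t.ppf(0.975, df)
    # The gap to z (about 1.96) shrinks as df increases
    print(df, round(t_crit, 4), round(t_crit - z_crit, 4))
```

At df = 30 the difference is already under 0.1, and at df = 1000 it is negligible, which is why software can safely use the t-test at every sample size.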
The Student's t-test is applicable for both small and large samples when the population standard deviation of the target population is unknown. In the rare case when the population SD is known a priori, the more valid test is the Gaussian z-test. Especially in medical research, where the population SD is absent, the Student's t-test is used. Please refer to page 244 of Basic Methods of Medical Research by Dr. Abhaya Indrayan, AITBS Publishers and Distributors, New Delhi - 110 051.
The advantage of the t-test over the z-test lies in small-sample comparisons. As n increases, t approaches z: the advantage of the t-test disappears, and the t distribution simply becomes the z distribution. In other words, with large n the t-test is just close to the z-test, and one doesn't lose anything by continuing to use the t-test. In the past, for convenience, we used the z table when n > 30; we don't have to do that anymore. In fact, all statistical packages use the t-test even when n is large. This is easy, convenient for computer programming, and correct. All statistical packages are good references.
My question is now: when performing sample size calculations, the sample size obtained by an exact method (based on the t distribution) is larger than that obtained by a method relying on asymptotic approximations.
Why is that? (It is a question on an exam and I can't find a proper answer.)
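The gap can be reproduced numerically (a sketch with assumed inputs: standardized effect 0.5, alpha 0.05 two-sided, target power 0.80, one-sample design). The asymptotic formula plugs in normal quantiles, while the exact calculation uses t critical values and noncentral-t power, both of which depend on n itself.

```python
import math
from scipy import stats

effect, alpha, power = 0.5, 0.05, 0.80   # assumed example values

# Asymptotic (z-based) formula: n = ((z_{1-alpha/2} + z_{power}) / effect)^2
z_a = stats.norm.ppf(1 - alpha / 2)
z_b = stats.norm.ppf(power)
n_z = math.ceil(((z_a + z_b) / effect) ** 2)

def t_power(n):
    """Exact power of the two-sided one-sample t-test via the noncentral t."""
    df = n - 1
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    nc = effect * math.sqrt(n)            # noncentrality parameter
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

# Search upward from the asymptotic answer for the smallest adequate n
n_t = n_z
while t_power(n_t) < power:
    n_t += 1

print(n_z, n_t)
```

Because the t critical value always exceeds the corresponding z value, the exact method needs a few extra observations to reach the same power.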