Parametric tests are based on distribution models that can be expressed as a parametric formula. In this sense, all tests based on the binomial, Poisson, exponential, gamma, beta, normal, Weibull, Cauchy, beta-binomial, negative binomial, gamma-Poisson, inverse Gaussian, geometric, Gompertz, Gumbel, and similar distributions are parametric.
For any given distribution model it is possible to derive the likelihood ratio statistic. The problem is usually to identify the probability distribution of that statistic (under H0). We know from Wilks' theorem that this is approximately Chi², but only in a few cases can we give the exact distribution. One of these rare cases is the normal distribution, where we know that the likelihood ratio statistic can be transformed into an F-statistic with an analytically tractable and known distribution (note that t² = F, so this applies to the t-tests).
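The normal-case connection can be checked numerically. The following is a minimal sketch (the sample size, seed, and hypothesized mean are arbitrary choices for illustration): for a one-sample normal model with unknown variance, the likelihood ratio statistic −2 log Λ equals n·log(1 + t²/(n−1)), a monotone function of t² = F.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=25)  # toy data, true mean 0.3
n = len(x)
mu0 = 0.0                                    # H0: mean equals 0

def max_loglik(data, mean):
    # Normal log-likelihood with the MLE of the variance plugged in:
    # maximizing over sigma^2 gives -n/2 * (log(2*pi*s2) + 1).
    s2 = np.mean((data - mean) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

# Likelihood ratio statistic: H1 uses the sample mean, H0 fixes mu0.
lr = 2 * (max_loglik(x, x.mean()) - max_loglik(x, mu0))

# The same quantity computed from the ordinary t statistic.
t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
lr_from_t = n * np.log(1 + t ** 2 / (n - 1))

print(lr, lr_from_t)  # identical up to floating-point rounding
```

So in the normal case the likelihood ratio test and the t-test (equivalently, the F-test) order the data identically and yield the same decisions.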
Having an "exact distribution" of the test statistic is nice mathematically, but usually it is not that relevant in practice. In practice, we must guess reasonable assumptions, and all our neat mathematical models are always approximations of the real world. Therefore, in practice, there is usually not much difference between using an "exact" test and an approximate test, because in any case they rely on necessarily approximate ideas of the real world (I hope I don't need to stress that our assumptions should certainly be reasonable and not in stark contrast to our observations and experiences). It is also better to use an approximate test on a reasonable distribution model (e.g. a Chi²-based test on a beta-distribution model) than an "exact" test like the F- or t-test in cases where the normal assumption is not reasonable.
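To make the "approximate test on a reasonable model" idea concrete, here is a hedged sketch of a Chi²-based likelihood ratio test on an exponential model (the data, sample size, and the hypothesized mean of 1.5 are all made-up illustration values, and scipy is assumed to be available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=40)  # clearly non-normal data
n = len(x)
mu0 = 1.5                                # H0: the exponential mean is 1.5

def exp_loglik(data, mean):
    # Log-likelihood of an exponential sample parameterized by its mean.
    return -n * np.log(mean) - data.sum() / mean

# H1 plugs in the MLE (the sample mean); H0 fixes the mean at mu0.
lr = 2 * (exp_loglik(x, x.mean()) - exp_loglik(x, mu0))

# Wilks' theorem: under H0, lr is approximately Chi^2 with 1 df.
p_approx = stats.chi2.sf(lr, df=1)
print(p_approx)
```

The p-value is only approximate, but the model respects the skewness of the data, which a t-test on the raw observations would ignore.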