As I understand it, if you use the mean and SD to describe your data, this implies the data are normally distributed and parametric. If so, the appropriate test should be a simple t-test.
If you want to estimate an interval (confidence interval or probability interval), you need distribution assumptions, but not necessarily the assumption of normality.
Almost every random variable has a distribution with a mean and a variance (except for some pathological cases that are typically not relevant in practice; e.g. the Cauchy distribution has neither a finite mean nor a finite variance).
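A quick way to see this pathology is to simulate it. The following is only a sketch of my own (numpy, with an arbitrary seed and sample size, not something from the question): the running mean of Cauchy draws keeps jumping around instead of settling down, while the running mean of normal draws converges.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
cauchy = rng.standard_cauchy(n)                  # no finite mean or variance
normal = rng.normal(loc=0.0, scale=1.0, size=n)  # mean 0, variance 1

def running_mean(x):
    return np.cumsum(x) / np.arange(1, len(x) + 1)

for label, x in [("Cauchy", cauchy), ("Normal", normal)]:
    rm = running_mean(x)
    # Print the running mean after 100, 1,000, 10,000 and 100,000 draws;
    # for the Cauchy case these values never converge to a fixed number.
    print(label, [round(float(rm[k - 1]), 3) for k in (100, 1_000, 10_000, 100_000)])
```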
Mean and variance are parameters of distributions; you can think of a distribution family as resulting from the same formula for different values of its parameters, just as you can think of a family of straight lines resulting from the formula ax+b for different values of the parameters a (slope) and b (intercept). For the normal distribution, the parameter µ determines the mean, and the parameter σ² determines the variance. The Bernoulli distribution is fully defined by a single parameter p, with the mean being equal to p and the variance being p(1-p). The Poisson distribution also has a single parameter λ that directly determines both the mean and the variance. There are many more distributions that are given as parametric formulas. It is therefore silly to say that "parametric" is associated with the "normal distribution".
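To make this concrete, here is a small sketch of my own using scipy.stats (my choice of tool, not anything from the question): for several parametric families, the mean and variance follow directly from the parameters, and none of this requires normality.

```python
from scipy import stats

examples = {
    "Bernoulli(p=0.3)":    stats.bernoulli(0.3),   # mean = p, var = p(1-p)
    "Poisson(lambda=4)":   stats.poisson(4.0),     # mean = var = lambda
    "Normal(mu=10, sd=2)": stats.norm(10.0, 2.0),  # mean = mu, var = sigma^2
}

for name, dist in examples.items():
    print(f"{name:22s} mean={dist.mean():.3f}  var={dist.var():.3f}")
```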
One can use the sample mean (and also the sample SD) as an inferential measure, too. The sample mean and sample variance are generally unbiased estimators of the population mean and variance, no matter what the distribution of the variable is (except for the pathological cases mentioned above; note, however, that the SD, being the square root of the variance, is a biased estimator).

The problem starts when you want to infer the sampling distribution of the sample mean (which you would need for hypothesis tests about the population mean). You know that the mean of the sample mean equals the mean of the variable, and that the variance of the sample mean equals the variance of the variable divided by the sample size. But what is unclear is the shape (functional form) of the distribution, which is required to calculate tail probabilities like p-values. This shape depends on the shape of the distribution of the variable. One can show that the sampling distribution is normal when the distribution of the variable is normal. All other cases are more difficult, and analytical solutions rarely exist. However, it is known that for small samples the sampling distribution is similar to the distribution of the variable (for n=1 it is identical), and that the larger the sample size, the more similar it becomes to a normal distribution (this is what the central limit theorem says). The t-test assumes that the sampling distribution is reasonably normal. This may be justified when the distribution of the variable is already approximately normal (unimodal, symmetric) and/or when the sample size is "sufficiently large".
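If it helps, here is a little simulation of my own (numpy/scipy, with an exponential variable as an arbitrary non-normal example) showing how the sampling distribution of the mean starts out shaped like the variable itself at n=1 and becomes less skewed (more normal) as n grows, with its variance shrinking like 1/n.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
reps = 20_000

for n in (1, 5, 50):
    # reps samples of size n drawn from a right-skewed exponential distribution
    sample_means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    print(f"n={n:3d}  mean={sample_means.mean():.3f}  "
          f"var={sample_means.var(ddof=1):.4f}  "   # roughly 1/n, as stated above
          f"skew={stats.skew(sample_means):.2f}")   # moves towards 0 as n grows
```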
The U-test tests stochastic equivalence, which is generally not about means at all. And it is not about medians either, as some people seem to think. Giving means and SDs is unrelated to doing U-tests. Testing stochastic equivalence may or may not be useful, but it simply has little to do with means and variances. Sometimes (often? usually?) authors use the result of the U-test to claim directional statements like "A is larger than B". This is generally unjustified. Since the U-test makes no assumptions about the distribution, it remains unclear how "direction" should be defined. For instance, it is possible to have two groups A and B with mean(A) > mean(B) and at the same time median(A) < median(B) (see the sketch below). Based on a significant U-test: will you claim A > B or A < B? One might instead say A > B if Pr(A > B) > 0.5, or A < B if Pr(A < B) > 0.5, and this definition need not agree with either the means or the medians.
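Here is a constructed example of my own (lognormal vs. normal groups, chosen purely for illustration, not data from the question) in which the means and the medians point in opposite directions, while a direct estimate of Pr(A > B), the quantity the U statistic is built from, points in yet another direction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
A = rng.lognormal(mean=0.0, sigma=2.0, size=500)  # heavy right tail
B = rng.normal(loc=3.0, scale=1.0, size=500)      # symmetric

print(f"mean(A)={A.mean():.2f}  >  mean(B)={B.mean():.2f}")
print(f"median(A)={np.median(A):.2f}  <  median(B)={np.median(B):.2f}")

u, p = stats.mannwhitneyu(A, B, alternative="two-sided")
prob_a_gt_b = (A[:, None] > B[None, :]).mean()  # direct estimate of Pr(A > B)
print(f"U-test p-value = {p:.3g},  estimated Pr(A > B) = {prob_a_gt_b:.2f}")
```

So the U-test can be clearly significant while "A is larger than B" remains an ambiguous claim: larger in mean, in median, or in the Pr(A > B) sense?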
Only under additional assumptions about the distributions can the U-test be interpreted differently. Textbooks frequently note that the U-test is a test of a location shift, when a location shift is the only thing that may differ between the distributions. This means that the distributions must be identical in every respect except for the location (same shape: same variance, same skew, same kurtosis, ...). I don't know a single practical case of such a scenario. Typically it is very clear that the variances are not similar between groups. Note that a "location shift" of a distribution means that every quantile is shifted; the shift is reflected not only in the median but identically in the mean and in the mode. So in this unlikely case of a pure location shift (and no difference in any other characteristic of the distribution!), a significant U-test does support the claim about which mean is larger (mean(A)>mean(B) vs. mean(B)>mean(A)). In this special situation, providing the means and the SDs is indeed useful for interpreting the U-test. But as I said, this is a scenario I have never seen in practice. What I see frequently is a very unreflective use of the U-test whenever authors are afraid (or know) that the distribution of the variable is not even approximately normal but they still want to make a claim about the direction of a change.
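For contrast, a sketch of the pure location-shift scenario (again a toy simulation of my own, with a gamma-shaped parent distribution chosen arbitrarily): every quantile, and with it the mean and the median, moves by roughly the same amount, and only then does a one-sided U-test support a directional claim.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
shift = 1.5
A = rng.gamma(shape=2.0, scale=1.0, size=400)          # skewed parent distribution
B = rng.gamma(shape=2.0, scale=1.0, size=400) + shift  # same shape, shifted location

qs = [0.25, 0.5, 0.75]
print("quantile shifts:", np.quantile(B, qs) - np.quantile(A, qs))  # all roughly 1.5
print("mean shift:", B.mean() - A.mean())                            # also roughly 1.5

u, p = stats.mannwhitneyu(B, A, alternative="greater")
print(f"one-sided U-test (B > A): p = {p:.3g}")
```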