1. The normality assumption will never be exactly true when one is working with real data.
2. The normality assumption for linear regression applies to the errors, not the outcome variable (see the sketch just after this list). The usual statement is that the errors are i.i.d. (independently and identically distributed) Normal with mean 0 and some variance. Independence and homoscedasticity are more important assumptions than normality.
3. If you have a reasonably large amount of data, then homoscedasticity (equal variances of the residuals) matters more than normality.
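To make point 2 concrete, here is a minimal simulation sketch (my own illustration, not from the answers above, using Python with numpy/statsmodels as an assumed toolset): the outcome y is strongly skewed because the predictor is skewed, yet the errors, and hence the residuals, are close to Normal, which is all the assumption asks for.

```python
# Sketch: the normality assumption concerns the errors, not the marginal
# distribution of y. Illustrative values only.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.exponential(scale=2.0, size=n)        # strongly skewed predictor
eps = rng.normal(loc=0.0, scale=1.0, size=n)  # i.i.d. Normal errors
y = 1.0 + 3.0 * x + eps                       # y inherits the skew of x

fit = sm.OLS(y, sm.add_constant(x)).fit()

# The marginal y is far from Normal, but the residuals are not:
print("skewness of y:        ", stats.skew(y))
print("skewness of residuals:", stats.skew(fit.resid))
```

Flipping this around, a normality test applied to y itself can falsely suggest a violation even when the model is exactly correct.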
Alexander said "The normality assumption for linear regression applies to the errors, not the outcome variable." That is true when you are using regression.
For survey statistics: with design-based (randomized sample selection) survey sampling, ratio and regression estimators do assume normality of the data, but the central limit theorem keeps that from being a large problem for means and totals. For strictly model-based survey estimation, which simply uses regression in the usual way (including ratio estimation), we only look at the estimated residuals (or their random factors, in weighted least squares regression), and again, even that generally does not matter very much.
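For concreteness, the classical ratio estimator of a population total (written here in generic survey-sampling notation, not anything specific from this thread) is

$$\hat{t}_{y,\text{ratio}} = t_x \, \frac{\bar{y}}{\bar{x}},$$

where $t_x$ is the known total of the auxiliary variable and $\bar{y}$, $\bar{x}$ are sample means; it is the large-sample (central limit theorem) behaviour of this ratio, not normality of the individual $y_i$, that justifies the usual confidence intervals for means and totals.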
Regarding heteroscedasticity: I have not used SEM, so perhaps, like some hypothesis tests, it requires homoscedasticity? I hope not. That would seem a mistake, as heteroscedasticity often occurs naturally in the error structure. I prefer to leave it there and use weighted least squares (WLS) regression. But if you feel you must, you can multiply each side of the regression equation by the square root of the (estimated) regression weight for observation i and use the transformed equation, which would be homoscedastic, or approximately so. But transformations, like hypothesis tests, can often be misinterpreted, and often both can be avoided. Other tools, such as the estimated variance of the prediction error, are often much to be preferred. Model validation with test data can be very helpful.
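As a hedged sketch of the WLS route (my own illustration in Python/statsmodels; the variance model with the standard deviation proportional to x is an assumption made for the example, not something from the post above): the weights are taken proportional to the inverse of the error variance, which is exactly what multiplying through by the square root of the weight accomplishes.

```python
# Sketch: fit the same line by OLS and by WLS when the error variance grows with x.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(1.0, 10.0, size=n)
sigma = 0.5 * x                               # assumed: error sd proportional to x
y = 2.0 + 1.5 * x + rng.normal(0.0, sigma)    # heteroscedastic errors

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()
wls_fit = sm.WLS(y, X, weights=1.0 / sigma**2).fit()  # weights = 1 / variance

print("OLS:", ols_fit.params, ols_fit.bse)
print("WLS:", wls_fit.params, wls_fit.bse)
```

Here OLS still gives roughly unbiased coefficients, but its usual standard errors ignore the unequal variances; the WLS fit uses them directly, which is usually preferable to transforming the equation by hand.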
...
Remember that with regression, you are looking at the conditional distribution of y, for a given x (or function of regressors), and not the unconditional distribution of the y population.
Cheers - Jim
PS - To see the conditional y distributions for given x-values, note the points on the graphs shown in here: https://onlinecourses.science.psu.edu/stat501/node/253
Also, the stopping distance example graph at the top of this file may help you see the difference between the y distribution and the conditional distribution of y, given x.
You can also read chapter 2.3, 'Violating the Assumptions; Exception or Rule?', from Zuur et al., 'Mixed Effects Models and Extensions in Ecology with R', where you will find a short summary of the assumptions for regression models and how to deal with them painlessly.
A very important assumption in linear regression is of course that the relationship between the dependent variable and each independent variable is a straight line. This may be judged by plotting, for example, the residuals of an initial fit against each independent variable. If the relationship is not a straight line, you may have to transform some of the variables, and if that fails, you may have to fit nonlinear regression models. Economic-type response variables are notoriously log-normally distributed, in which case you would have to take logarithms before fitting the regression.
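As a minimal sketch of that workflow (illustrative Python/statsmodels code with an assumed log-normal data-generating process, not real economic data): plot the residuals of an initial fit against the predictor to judge linearity, then log-transform the response and refit.

```python
# Sketch: check linearity via a residual plot, then refit on the log scale.
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0.5, 5.0, size=n)
# Assumed data-generating process: log-normal response, linear on the log scale.
y = np.exp(1.0 + 0.8 * x + rng.normal(0.0, 0.3, size=n))

X = sm.add_constant(x)
resid_raw = sm.OLS(y, X).fit().resid

# Systematic curvature here indicates the straight-line assumption fails on the raw scale.
plt.scatter(x, resid_raw)
plt.axhline(0.0, color="grey")
plt.xlabel("x")
plt.ylabel("residual (raw-scale fit)")
plt.show()

# Refitting on the log scale gives a linear relationship with roughly Normal errors.
log_fit = sm.OLS(np.log(y), X).fit()
print(log_fit.params)  # approximately recovers intercept 1.0 and slope 0.8
```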