Yes, I agree with Prof Dijkstra. ML estimation in SEM can produce consistent parameter estimates even under non-normality. But today, researchers still need to test normality to justify that their data are suitable for parametric testing.
So, you can check the Mahalanobis Distance in Amos to assess the normality of the distribution. An absolute skewness value of 1.0 or lower indicates that the data are approximately normally distributed. If normality is not fulfilled, we can remove the non-normal items from the measurement model and continue the analysis. Another option is to remove the observations farthest from the centre of the distribution (outliers).
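For readers who want to reproduce this kind of check outside Amos, here is a minimal sketch in base R. It is only illustrative: `items` is a placeholder data frame of indicator scores, and the 0.999 chi-square quantile is just one common rule of thumb for flagging extreme Mahalanobis distances.

```r
# Illustrative sketch (not Amos output): flag potential multivariate outliers
# with Mahalanobis distance and check univariate skewness.
# 'items' stands in for your numeric indicator scores.
items <- as.data.frame(scale(matrix(rnorm(200 * 6), ncol = 6)))  # placeholder data

# Squared Mahalanobis distance of each observation from the centroid
d2 <- mahalanobis(items, center = colMeans(items), cov = cov(items))

# Rule-of-thumb cutoff: 0.999 quantile of chi-square with df = number of variables
cutoff <- qchisq(0.999, df = ncol(items))
which(d2 > cutoff)  # row numbers of potentially extreme observations

# Simple moment-based skewness for each item, compared against the
# |skewness| <= 1.0 rule of thumb mentioned above
skew <- sapply(items, function(x) mean((x - mean(x))^3) / sd(x)^3)
round(skew, 2)
abs(skew) <= 1.0
```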
However, the most popular approach lately is to continue with MLE (without deleting any item and without removing any observation) and to re-confirm the results of the analysis through bootstrapping.
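In Amos this is done through the Analysis Properties dialog. As a rough programmatic analogue, the sketch below uses the lavaan package in R instead of Amos, with a hypothetical one-factor model fitted to lavaan's built-in HolzingerSwineford1939 data: point estimates stay ML, while standard errors and the Bollen-Stine test of fit are obtained by bootstrapping for comparison.

```r
# Hedged sketch using lavaan (an R alternative to Amos): estimate with ML,
# then re-confirm the conclusions with bootstrapping.
library(lavaan)

# Hypothetical one-factor measurement model; replace with your own syntax.
model <- ' F1 =~ x1 + x2 + x3 + x4 '

# Ordinary ML estimation
fit_ml <- cfa(model, data = HolzingerSwineford1939)

# Same model, but standard errors and a Bollen-Stine test of fit
# obtained by bootstrapping instead of normal-theory formulas
fit_boot <- cfa(model, data = HolzingerSwineford1939,
                se = "bootstrap", test = "bollen.stine", bootstrap = 500)

summary(fit_ml)
summary(fit_boot)  # conclusions should be similar if the ML results are trustworthy
```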
First of all, examine the skewness of that particular variable. If the skewness exceeds an absolute value of 1.5, it indicates that the distribution of that variable has departed from normality. Check the Mahalanobis Distance to identify extreme data. By deleting some extreme data points from your data set, the distribution of that variable should become normal.
No. Don't just delete "extreme" data. What kind of principle is that? Why not just delete any data you don't like, and invent some other data you do like? That would certainly make research easier..... :/
Dear Prof Dijkstra, thanks for your input and article!
I am aware that the ML estimator would be able to handle non-normal data, but suppose I need to report normality, as Wan Mohamad mentioned: how would I do so? I understand that linear regression assumes the residuals are normally distributed, and one can check this assumption by plotting the standardized predicted values against the standardized residuals (or by looking at the skewness and kurtosis of the residuals). In SEM with both observed and unobserved variables, how do I check for non-normality? Would I do that with the errors of the observed indicators only? Thank you in advance for your guidance.
Wan Mohamad Asyraf, thank you for your input. Since my study is confirmatory in nature, I prefer not to delete any items (I would rather transform the variables), nor do I believe in simply removing outliers. But your suggestion to use bootstrapping as 'confirmation' of the results is new to me. Would you happen to have references for this method? Thank you in advance.
You have already received some valuable suggestions. However, in some cases it may be necessary to check your data for multivariate normality. If this is the case, you could use the MVN package for R:
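A minimal sketch of how the MVN package might be used follows; `mydata` is a placeholder (here the numeric columns of the built-in iris data), and the `mvn()` call with its `mvnTest` argument reflects recent versions of the package, so the exact interface may differ in older releases.

```r
# Hedged sketch of a multivariate-normality check with the MVN package.
# install.packages("MVN")
library(MVN)

mydata <- iris[, 1:4]  # placeholder numeric data frame

# Mardia's multivariate skewness and kurtosis test
result <- mvn(data = mydata, mvnTest = "mardia")
result$multivariateNormality
# In recent versions the returned list also contains univariate normality
# tests and descriptive statistics for each variable.
```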