The normal distribution is symmetric with well-behaved tails. Here, that is indicated by a skewness of 0.03 and a kurtosis of 2.96, which is near the expected value of 3.
Normality can be tested using skewness and kurtosis by applying a z-test: a z-score is obtained by dividing the skewness (or excess kurtosis) by its standard error.
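As a minimal pure-Python sketch of this z-test (the function name is mine, and I use the common large-sample standard errors sqrt(6/n) for skewness and sqrt(24/n) for excess kurtosis):

```python
import math
import random

def skew_kurtosis_z(data):
    """Z-statistics for normality based on skewness and excess kurtosis.

    Uses the large-sample standard errors SE_skew = sqrt(6/n) and
    SE_kurt = sqrt(24/n); |z| > 1.96 suggests non-normality at alpha = .05.
    """
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0
    z_skew = skew / math.sqrt(6.0 / n)
    z_kurt = excess_kurt / math.sqrt(24.0 / n)
    return z_skew, z_kurt

# Illustrative data: 320 draws from a normal population.
random.seed(42)
sample = [random.gauss(0, 1) for _ in range(320)]
z_s, z_k = skew_kurtosis_z(sample)
print(z_s, z_k)  # for normal data, both z-scores usually fall inside +/-1.96
```

Statistical packages (e.g. SciPy's `skewtest`/`kurtosistest`) implement refined versions of these tests; the sketch above only shows the basic idea.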
For more details and references, please see the link:
@Karami: Thank you very much. But I think the sample size is not that large, so we still have to test for normality, since I am testing hypotheses through AMOS-based SEM...
A large sample size does not ensure normality. As sample size increases, the distribution of your data converges to the distribution of the sampled population. If the population is not normally distributed, then your sample will never be normally distributed, no matter how many observations you collect. However, a large sample size does (mostly) ensure that the distribution of the means of all possible samples of that particular size will converge to normal. That is, the distribution of the means of all possible samples of size 100 will be closer to Gaussian than the distribution of the means of all possible samples of size 5. So if you repeat your experiment 10,000 times with a sample size of 320 each time, then the distribution of those 10,000 means will be closer to Gaussian than if you had done the same experiment with a sample size of 100.
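This distinction can be checked with a small simulation (the setup is illustrative: an exponential population, whose skewness of 2 makes it clearly non-normal):

```python
import random

def sample_skewness(data):
    """Moment-based skewness estimate."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    return m3 / m2 ** 1.5

def skew_of_means(sample_size, reps=10_000, seed=1):
    """Skewness of the distribution of sample means drawn from an
    exponential population (population skewness = 2, clearly non-normal)."""
    rng = random.Random(seed)
    means = [sum(rng.expovariate(1.0) for _ in range(sample_size)) / sample_size
             for _ in range(reps)]
    return sample_skewness(means)

# Means of larger samples are closer to Gaussian (skewness nearer 0),
# even though every individual sample remains exponential.
print(skew_of_means(5), skew_of_means(100))
```

The individual observations stay exponential no matter how many you draw; only the sampling distribution of the mean approaches normality.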
You can mostly dispense with testing for normality if you assume from the outset that your data are not normally distributed. What you should think about is how you want to deal with that situation. If you decide to run a test anyway, then I would argue that a failure to find a significant departure from normality is an artifact of a small sample size. If you collect enough samples, there is a good chance that you will be able to reject the null hypothesis; this might take 100,000 samples or 10,000,000 samples. So at what point do you go with non-parametric methods versus relying on parametric methods to be robust in the face of "small" departures from the assumptions of that model? If you use a statistical test for normality, then what you are arguing is that the test's critical threshold, where you fail to reject the null hypothesis that the data are normally distributed, coincides with the threshold where a violation of the normality assumption has a measurable impact on the results. I have never seen this proven.
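The sample-size dependence is easy to see with the skewness z-test discussed above, since its z-statistic is skew / sqrt(6/n). The numbers here are hypothetical: a fixed, practically harmless skewness of 0.2 passes the test at n = 300 but is flagged as "non-normal" once n is large enough.

```python
import math

def z_for_skew(skew, n):
    """z-statistic for a skewness estimate, using SE = sqrt(6/n)."""
    return skew / math.sqrt(6.0 / n)

# The same mild skewness of 0.2, tested at increasing sample sizes:
for n in (300, 3_000, 100_000):
    z = z_for_skew(0.2, n)
    verdict = "reject normality" if abs(z) > 1.96 else "fail to reject"
    print(n, round(z, 2), verdict)
```

The underlying departure from normality never changes; only the test's power to detect it does.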
@Ebert: Thanks for the comment. I am testing a model with AMOS-based SEM, with 3 IVs, 1 mediator, and 1 DV. Since SEM assumes normality, I ran the test in SPSS and got the following results.
Skewness values are good, i.e., 0.2 to 0.60.
Kurtosis values are 1.8 and 1.7 for 2 variables; the other 3 variables are in the 0.6 to 0.80 range. So should I remove outliers, or interpret the data as is?
Outliers can be a problem. The key question is: why are there outliers? Are they mistakes: errors in data entry, errors in recording surveys, errors in selecting participants, or other errors like these? If so, then it is a good idea to delete the outliers. However, in many cases we get outliers because we have an insufficient sample size to correctly model the underlying distribution. Most of our results come close to the average (or median) value, but a few are more extreme. The extreme results are important, because discarding them results in an underestimation of the population variance. While this may make the results "look good," it compromises the integrity of the research. Ask yourself: if your results depend on the presence or absence of a single data value, how accurate do you think your conclusions will be?
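A tiny illustration of the variance point, using hypothetical numbers (one legitimate extreme value among otherwise mid-range observations):

```python
import statistics

# Hypothetical sample: mostly mid-range values plus one legitimate extreme.
data = [12, 14, 15, 15, 16, 17, 18, 40]
trimmed = data[:-1]  # "cleaning" the data by deleting the extreme value

print(statistics.mean(data), statistics.stdev(data))
print(statistics.mean(trimmed), statistics.stdev(trimmed))
# Dropping the extreme value makes the sample "look good" but
# understates the mean and, especially, the variability of the population.
```

If the extreme value is a real observation rather than a recording error, the trimmed estimates are the misleading ones.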
@Ebert: Your last comment is very meaningful. Values within +/-1 are recommended as indicating normality (Hair et al., 2006). But in my case, 2 variables have kurtosis values greater than 1, i.e., 1.8 and 1.8, which might be problematic? If I remove these outliers, I am afraid of dropping respondents and violating the AMOS assumption, because the sample size is 320 and AMOS recommends more than 300; removing outliers may reduce the sample.
However, studies such as George and Mallery (2010) recommend values up to +/-2 for skewness and kurtosis. Can I use this reference to support my data?
How different is the result if you do the analysis assuming that everything is normally distributed (or close enough that it makes no difference) versus assuming that nothing is normally distributed?
The relevance of any one citation is limited by the extent to which the assumptions made by that author apply to your data. All such recommendations are prefaced by something like "within my experience this is true." Unless there is a formal mathematical proof that data with a skewness of +/-2 behave as effectively normally distributed for that method, we are left wondering what conditions might exist beyond the author's experience that would invalidate the author's claim.
There are three approaches towards data analysis:
1) I select one model to the best of my ability and run a single analysis. The end.
2) I run several models and select the one that gives me an answer that is closest to what I think the answer should be. Never do this no matter how tempting.
3) I run several models and then compare them. I am interested in knowing how my choice of model influences the answer that I get. A strong effect should be present in all models. A weak effect (or no effect), might be present in only some of the models. I then decide if all I am interested in are the strong effects, or if I need to figure out which of the weak effects are true versus type I errors.
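Approach (3) can be as simple as running a parametric and a rank-based version of the same analysis side by side. As a hedged sketch (the data here are invented: a monotone relationship plus one extreme x value), compare a Pearson correlation with its rank-based Spearman counterpart:

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def ranks(v):
    """Ranks 1..n; no tie handling (fine here, all values are distinct)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# Hypothetical data: a clear monotone trend plus one extreme x value.
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 50]
y = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
print(pearson(x, y), spearman(x, y))
```

For these made-up data, Pearson is roughly 0.55 while Spearman is about 0.94: a strong effect should survive both analyses, and when the two disagree this badly, the extreme value (or the distributional assumption) is driving the parametric result.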
Skewness is meaningful for data on a continuous scale, not at the ordinal level. So if you compute the total score for the Likert scale, check the skewness and kurtosis of that total.
In ordinal level, the suitable statistics in most cases are non-parametric.
The acceptable skewness level varies among textbooks. Some consider 0.2 the acceptable limit, while others go up to 1.0.
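In code, the suggestion above amounts to summing each respondent's item scores and checking the skewness of the totals. This sketch uses invented survey data (320 respondents, 8 five-point Likert items, responses drawn at random purely for illustration):

```python
import random

def skewness(data):
    """Moment-based skewness estimate."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    return m3 / m2 ** 1.5

random.seed(7)
# Hypothetical survey: 320 respondents, 8 five-point Likert items each.
respondents = [[random.randint(1, 5) for _ in range(8)] for _ in range(320)]

# Total score per respondent: treated as an approximately continuous scale.
totals = [sum(items) for items in respondents]

print(round(skewness(totals), 3))  # compare against your chosen cutoff, e.g. +/-1
```

The individual items remain ordinal; it is the summed scale score that is treated as approximately continuous for the skewness check.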
I agree with Prof. Timothy A. Ebert. When the sample size is less than six, one can use a non-parametric test; in that case no normality test is required. There is plenty of evidence that a large sample does not guarantee that the data are normally distributed. If anyone wants to use a parametric test, it is mandatory to check the normal-distribution assumption first.