It is desirable that for the normal distribution of data the values of skewness should be near to 0. What if the values are +/- 3 or above?
The values for asymmetry and kurtosis between -2 and +2 are considered acceptable in order to prove normal univariate distribution (George & Mallery, 2010). George, D., & Mallery, P. (2010). SPSS for Windows Step by Step: A Simple Guide and Reference, 17.0 update (10th ed.). Boston: Pearson.
It depends mainly on the sample size. Most software packages that compute the skewness and kurtosis also compute their standard errors.
Both
S = skewness/SE(skewness)
and
K = kurtosis/SE(kurtosis)
are, under the null hypothesis of normality, approximately standard normally distributed.
Thus, when |S| > 1.96 the skewness is significantly (alpha=5%) different from zero; the same for |K| > 1.96 and the kurtosis.
To manually compute the standard errors, see the formulae for the variance of skewness and kurtosis (https://en.wikipedia.org/wiki/Skewness, https://en.wikipedia.org/wiki/Kurtosis ) and take the square root.
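A minimal sketch of this z-statistic check in Python, assuming scipy and numpy are available (the sample size and seed are arbitrary; the standard-error formulae are the ones from the Wikipedia pages above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=500)
n = len(x)

skew = stats.skew(x)
kurt = stats.kurtosis(x)  # excess kurtosis: 0 under normality

# Standard errors under the null hypothesis of normality
se_skew = np.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
se_kurt = 2 * se_skew * np.sqrt((n**2 - 1) / ((n - 3) * (n + 5)))

z_skew = skew / se_skew
z_kurt = kurt / se_kurt

# Reject normality at alpha = 5% when |z| > 1.96
print(f"z_skew = {z_skew:.2f}, z_kurt = {z_kurt:.2f}")
```

Note that scipy already packages essentially this logic (with small-sample refinements) as `stats.skewtest` and `stats.kurtosistest`.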
A common rule of thumb is an amplitude of -1 to +1. Nevertheless, as Casper said, you should calculate 95% CIs for adequate reporting of results.
I have come across another rule of thumb -0.8 to 0.8 for skewness and -3.0 to 3.0 for kurtosis.
A normality test which only uses skewness and kurtosis is the Jarque-Bera test. The idea is similar to what Casper explained.
(One remark: it has an asymptotic chi-squared distribution, but the convergence is very slow, and empirical tables exist for small samples.)
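For illustration, the Jarque-Bera test is available directly in scipy; this sketch (arbitrary seed and sample sizes) contrasts a normal sample with a clearly skewed one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_sample = rng.normal(size=1000)
skewed_sample = rng.exponential(size=1000)

# JB combines squared skewness and excess kurtosis into one statistic
# that is asymptotically chi-squared with 2 degrees of freedom
jb_norm, p_norm = stats.jarque_bera(normal_sample)
jb_skew, p_skew = stats.jarque_bera(skewed_sample)

print(f"normal: JB = {jb_norm:.2f}, p = {p_norm:.3f}")
print(f"skewed: JB = {jb_skew:.2f}, p = {p_skew:.3g}")
```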
I think what you actually want to check is normality, so instead of going by any rule of thumb, use the Jarque-Bera test. It is based on skewness and kurtosis, so acceptance of the null hypothesis in this test indicates that skewness and kurtosis are in an acceptable range for normality, while rejection means that they are not.
There are two different common definitions of kurtosis: (1) mu4/sigma^4, which indeed is 3 for a normal distribution, and (2) kappa4/kappa2-squared (excess kurtosis), which is 0 for a normal distribution. Which value is computed depends on your software settings; most software (e.g., Excel and SPSS) reports the second definition, although some packages, such as the R package 'moments', use the first.
When working with the first definition it is, as Peter states, not surprising to find kurtoses close to 3; when working with the second definition it is more surprising.
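The two definitions can be compared directly in scipy, where the `fisher` flag switches between them (a quick sketch with an arbitrary seed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)

# Definition (1): mu4 / sigma^4 -> approximately 3 for a normal sample
pearson_kurt = stats.kurtosis(x, fisher=False)

# Definition (2): excess kurtosis, mu4 / sigma^4 - 3 -> approximately 0
excess_kurt = stats.kurtosis(x, fisher=True)  # scipy's default

print(pearson_kurt, excess_kurt)
```

Knowing which definition your software reports matters before applying any of the cut-offs discussed in this thread.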
"Normal distribution" is a human concept, an example of this is "the best method to check normality is visual inspection", in this context, using the classification provided by Casper Albers (see above) (2) only 0 is classified like "mesokurtic", if it´s not = 0 then is platykurtic or leptokurtic...
It's whatever range gives you an acceptable p-value for the Anderson-Darling test
The scientific standard in research journals is to use the Kolmogorov-Smirnov test. If it is not significant, the distribution can be considered normal. For very small samples, however, this test may not be adequately powered, and you may fail to reject non-normality.
To my knowledge the Shapiro-Wilk test is more powerful than the Kolmogorov-Smirnov test (Karen, please correct me if I am wrong). But performing a test to reject or accept a hypothesis (like "the sample is taken from a normally distributed population") is not, at least not directly, related to the question of an "acceptable range" of deviations. If one were to use a test to decide this question, one would need to define a reasonable alternative hypothesis. One must think of how and where to set alpha, and a well-defined alternative is required to set beta. Then, and only then, can a test result be sensibly interpreted. However, exactly this sensible choice of alpha and beta *requires* having a reasonable idea about the size of relevant deviations (and that, as I understood it, was what the question was about).
I do not see that there can be a general answer. First of all, it depends on the purpose (why is a normal distribution important in the particular context?). Secondly, the deviation is not "one-dimensional". Strictly, the dimensionality of the deviation is not defined, so there is no way to define *the* deviation at all. But one can look at a few particular aspects, like skewness and kurtosis. And even for these two it is likely important to consider their combination. I mean to say: the range of acceptable deviations for the kurtosis might depend on the actual value of the skewness (and vice versa). This is a two-dimensional problem (think of the acceptable range as, for instance, an elliptic region on the plane over these two parameters). A correlation between kurtosis and skewness might also matter, so that not all combinations of values for these parameters are possible, further complicating the whole story (the region of acceptable values might not be simply elliptic and may have a rather complicated shape).
So I think the only way to answer this question is by experience: trial and error. If the results obtained are [not] good enough for the purpose for variables that deviate by a particular amount in kurtosis and skewness from a normal distribution, then these deviations (in the given combination) are obviously [not] acceptable. Maybe one can extrapolate [un]acceptable deviations from similar data/studies that have already been performed.
The trouble with the Kolmogorov Smirnov test is that it performs acceptably when the mean and standard deviations of the population are known. However, when we substitute for these the sample mean and standard deviation it does not perform well.
The other problem is implicit in your question : what do we mean by 'acceptable'? This has to depend on context. 'Desirable' for what?
If we're talking about the old canard about normally distributed data in regression, then the answer depends on simulation studies. But note that it is not the distribution of the predicted variable that is assumed to be normal but the sampling distribution of the parameter being estimated.
But you have learned from this discussion something important about rules of thumb: everyone has a different thumb.
When your sample is very large, the KS test becomes very sensitive to small variations. Therefore I divide the sample skewness and kurtosis by their standard errors to get test statistics, which measure how many standard errors separate the sample skewness or kurtosis from zero. If the z-values for skewness and kurtosis are between -2 and +2, you can conclude that your sample might be symmetric. I hope the attached file helps you. Cheers.
Aslam,
Everyone has different ways, but a common purpose.
The first step in assessing normality is to check for outliers.
The acceptable range for skewness or kurtosis is below +1.5 and above -1.5 (Tabachnick & Fidell, 2013). If your values fall outside it, you have to consider transforming the data and dealing with outliers.
So, if you cannot reach an acceptable range, you may not get correct results, especially in CFA and other statistical analyses.
Hope this helps.
Hello fellow researchers,
I am having a similar issue. I too have small skewness and kurtosis values; however, when running both of these tests I receive significant values, indicating that the data are not normally distributed. In addition, the G-plot graph shows fidelity to the expected values. I have a sample size of 792 and was investigating an independent variable.
Can anyone shed light on this issue? Is there something blatant that I could be disregarding?
Thank you,
Gabriella
If the question is one of normality, go with the Anderson-Darling (AD) test (KS does not perform as well as AD in the tails, making AD the gold standard of normality testing in industrial applications; I am not sure about research). If your primary concern is kurtosis, the KS test is fine (I'm using it very successfully). If you are concerned about skewness as well, then AD and Shapiro-Wilk (SW) are your friends. The Shapiro-Wilk test has the best power for a given significance level, but it is slow when dealing with large samples, and AD follows closely enough. There is Royston's approximation for the Shapiro-Wilk test that allows it to be used for bigger samples.
What was said about KS requiring a priori knowledge of the mean and standard deviation is true (Shapiro-Wilk, by contrast, estimates everything from the sample), which is another advantage of the AD test. If you have to go with KS or SW, I would first remove outliers, estimate the mean and standard deviation, and then apply the test.
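For those who want to try these tests side by side, here is a minimal sketch in Python (assuming scipy is available; the seed, sample size, and parameters are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(loc=5, scale=2, size=300)

# Shapiro-Wilk: strong power for small/medium samples
w_stat, p_sw = stats.shapiro(x)

# Anderson-Darling: scipy's critical values account for estimating
# the mean and standard deviation from the sample
ad = stats.anderson(x, dist='norm')

# Kolmogorov-Smirnov against a fully specified normal; note that plugging
# in the sample mean/sd makes the standard p-value anti-conservative
# (a Lilliefors correction would be needed)
d_stat, p_ks = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))

print(f"Shapiro-Wilk p = {p_sw:.3f}")
print(f"AD statistic = {ad.statistic:.3f}, "
      f"5% critical value = {ad.critical_values[2]}")
print(f"KS p = {p_ks:.3f}")
```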
The BIG QUESTION is: "Why do you need to test for normality?" If you are testing for normality as the means of choosing your statistical test, then it is intrinsically wrong!
For big enough data (say, in biological science, above 100 or 200 observations) the t-test and the Wilcoxon test have a 95% chance of telling you the same thing. With today's fast personal computing you could even do both and see if they disagree; if they don't, use the t-test if you wish.
I like the non-parametric tests, and if you give them enough sample size they are very robust compared with the parametric ones.
For a nice thorough discussion see http://stats.stackexchange.com/questions/2492/is-normality-testing-essentially-useless
Some suggest that the most acceptable values for the two statistics should range between -1 and +2. See http://psychology.illinoisstate.edu/jccutti/138web/spss/spss3.html
A statistical significance level of .01, which equates to a z-score of ±2.58, is acceptable.
https://statistics.laerd.com/premium/tfn/testing-for-normality-in-spss.php
Skewness and kurtosis can supply additional information. When I coordinated a big project with 200 field researchers collecting data (distributed across 100,000 km², 3.7 million inhabitants, n = 9,850), I suspected the randomization would probably have a bias. When analyzing the data from one particular field researcher (for quality control) and doubting the reliability of the field values (the age variable in health/social research is useful for checking normality), I first ran Shapiro-Wilk or K-S, and then checked the skewness and kurtosis "proportions": if the skewness standard error is bigger than the skewness value, or the kurtosis standard error is bigger than the kurtosis value, "Houston, we have a problem." I then verified this non-normality empirically by talking with the field researcher (methodological auditing/supervising). I don't know of other researchers who test this way.
Check this link; I think it answers your question, and it enriched me:
https://www.researchgate.net/publication/262151892_Introduction_to_SPSS
I do not know much about other disciplines, yet to my knowledge most researchers in the field of social science follow less stringent criteria based on the suggestions of Kline (1998, 2005): data with a skewness above an absolute value of 3.0 and a kurtosis above an absolute value of 8.0 are considered problematic.
The best test for normality is the Shapiro-Wilk test; you can use SPSS for this purpose. On the other hand, you can use many other methods to test normality, one of which is skewness or kurtosis with acceptable limits of ±2.
It is very arbitrary to judge the normality of variables from skewness only; please see the Shapiro-Wilk or Kolmogorov-Smirnov tests in SPSS.
Absolute values > 0.2 indicate noticeable skewness (Hildebrand, 1986).
In my travels the rule of thumb I have come to know for both is between -1 to 1. In Stata you have to subtract 3 from kurtosis.
Closer to zero the better.
According to Bulmer, M. G. (1979), Principles of Statistics,
the distribution can be considered normal when skewness is between -1 and +1.
Byrne (2010) suggests a kurtosis value of 3 for a normal distribution, while values exceeding 5 indicate that data are non-normally distributed (Bentler, 2006).
But, again, Jochen's answers also need to be considered.
Naeem,
I have learned much from reading the wonderful answers provided by other researchers to your questions. I was recently asking the same questions when exploring the normality of my data-set before deciding on the use of parametric analyses to confirm or reject my research hypotheses. What I learned was that the indicator value range I chose for the skewness and kurtosis of my data was important for several reasons.
I used acceptable limits of ±2 (Trochim & Donnelly, 2006; Field, 2000, 2009; Gravetter & Wallnau, 2014). Hope this helps!
Janet
References:
Trochim, W. M., & Donnelly, J. P. (2006). The research methods knowledge base (3rd ed.). Cincinnati, OH: Atomic Dog.
Gravetter, F., & Wallnau, L. (2014). Essentials of statistics for the behavioral sciences (8th ed.). Belmont, CA: Wadsworth.
Field, A. (2000). Discovering statistics using SPSS for Windows. London, Thousand Oaks, New Delhi: Sage Publications.
Field, A. (2009). Discovering statistics using SPSS. London: SAGE.
Just for fun I paste a link for an article by Firefox researchers on self-selection bias for you to review. The article discusses their considerations when performing survey research on specific populations. I also provide a link to a PPT on how to transform skewed data.
https://jonoscript.wordpress.com/2010/10/09/test-pilot-self-selection-bias-and-how-to-compensate-for-it/
http://webcache.googleusercontent.com/search?q=cache:-6ptop8m30EJ:www.utexas.edu/courses/schwab/sw388r7/SolvingProblems/ComputingTransformations.ppt+&cd=3&hl=en&ct=clnk&gl=us
http://poq.oxfordjournals.org/content/75/2/349.short
Values within the range of -1.96 to +1.96 are said to be acceptable; beyond these limits the data can be called skewed.
I can't find the ±1.5 skewness/kurtosis criterion in Tabachnick and Fidell (2013). Does anyone know where it is in the book?
Dear Chalamalla Srinivas,
Multivariate normality tests are performed using asymmetry tests (skewness < 3), kurtosis between -2 and +2, and the Mardia criterion (< 3). Source: Chemingui, H., & Ben Lallouna, H. (2013).
May I get the reference for this statement?
Dear all,
Referring to some publications, I conclude that the acceptable limit for skewness and kurtosis in a test for normal distribution of data could be ±2.
Thanks,
Marius
My professor said in class that both indicators should be within ±1, but in all the references I have found, ±2 is acceptable. By the way, thanks for the detailed information.
Thedorus,
Thank you for participating in the discussion. Would you be willing to share a bit about the different acceptable limits for skewness and kurtosis (provided in the literature) based upon the domain of the study, types of variables, and use of the data, such as for a survey versus testing instrument?
For example, high stakes testing using cognitive content requires high reliability, and therefore indices for all measures of analyses are narrower. When testing data using psychosocial variables and with high response numbers compared to items the analyses may not require such rigor to gain the same value because the factors themselves are broadly defined.
I am interested to learn more.
Kind regards,
Janet
I'm sorry but I think you're all wrong (but one comment I read).
You don't ask the good question.
The assessment of normality firstly depends on the variable's mechanism: additive errors suggest a normal distribution, multiplicative errors a log-normal one (or another mechanism entirely).
Then it depends on the statistical test you want to use.
If you want to know if your kurtosis/skewness has an impact on the normality of your variable, you should first check the dependence of the power of the test used against different values of kurtosis/skewness.
Your kurtosis and skewness won't have the same impact on a one-way anova or on an ancova.
We developed a new procedure to determine skewness: j.ponte.2017.2.34
Article TO DETERMINE SKEWNESS, MEAN AND DEVIATION WITH A NEW APPROAC...
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjR89T9tIDTAhUBxrwKHaMQDcwQFggbMAA&url=http%3A%2F%2Fdocuments.routledge-interactive.s3.amazonaws.com%2F9780415628129%2FChapter%252013%2520-%2520Tests%2520for%2520the%2520assumption%2520that%2520a%2520variable%2520is%2520normally%2520distributed%2520final_edited.pdf&usg=AFQjCNHEbQNbsQHloAyS46L0zQET-r38qA&sig2=RoRgeeebb_bVgM124qrBZg
I found this tutorial quite helpful :)
Do the t-test and ANOVA really assume normality?
https://www.youtube.com/watch?v=yNdlGRz-Z04
I was looking for some understanding of this problem and found this discussion.
Thank you all for your enlightening comments, especially Janet's extensive ones.
As I see, most people simply use some normality test, such as D'Agostino's, Jarque–Bera, Anderson–Darling, Kolmogorov–Smirnov, or Shapiro–Wilk.
One opinion that came to my attention yesterday was that normality tests are 'essentially useless'! The argument was two-fold: 1) those are null-hypothesis tests *against* normality and, therefore, are only informative if the data are *not* normal; 2) for small datasets, those tests almost always fail to reject normality (missing real deviations), while for really large datasets they almost always reject it (flagging deviations too trivial to matter). As a consequence, many people advise forgetting about those tests and checking only comparisons of kurtosis and skewness with their standard errors.
What do you think about this opinion?
Dear Renato,
You're totally right about the use of normality tests.
Therefore, taking kurtosis and skewness into account won't be THE solution; they are more like clues.
When assessing normality, one should gather several clues in order to uncover the underlying mechanisms affecting the analysed variable.
QQ-plots, residual-vs-predicted-value plots (a very useful graph when assessing normality and log-normality), histograms, AND skewness and kurtosis are all good clues. Don't forget to take the sample size into account as well: the data will tend toward their true distribution as the sample size grows.
The last thing would be to fit a model to the variable you want to analyse before using all of those graphs and statistical parameters. Some variables could have a hidden effect on your variable (e.g., hypothermia experiments could bias the distribution of an animal's body temperature).
YH
Different methods and formulae exist for calculating skewness, and different methods give different values of skewness for the same data set. So it is very difficult to decide on an acceptable range that can lead one to believe the distribution is not skewed. The same thing happens with kurtosis. Different computer software also gives different values of each of them for the same data set. Moreover, kurtosis shows the peakedness of the normal probability curve; it does not decide the normality of a distribution and has nothing to do with it. So, to decide the normality of a distribution, we should use a proper normality test. I have discussed some such tests in my paper "Normality Test", which is available on RG.
Use robust statistics if you are in doubt; they are very easy, and for small data sets you can compute them quickly by hand.
1. For small samples (n < 50), if the absolute z-score for either skewness or kurtosis is larger than 1.96, which corresponds to an alpha level of 0.05, then reject the null hypothesis and conclude that the distribution of the sample is non-normal.
2. For medium-sized samples (50
I agree with Javier that values for asymmetry and kurtosis between -2 and +2 are considered acceptable in order to demonstrate a normal univariate distribution.
You can also use various tests for checking normality, such as the Shapiro-Wilk test or the Kolmogorov-Smirnov test, or even plot a Q-Q plot.
Some say (-1, 1) for skewness and (-2, 2) for kurtosis is an acceptable range for a normal distribution; others say (-1.96, 1.96) for skewness is acceptable. So it is better to run the various normality tests (Shapiro-Wilk, Kolmogorov-Smirnov) or to plot a Q-Q plot.
I agree with Mohsin Altaf's answer. Skewness is a measure of (a)symmetry and kurtosis is a measure of peak-ness (how high the central peak is). It is good practice to keep in mind the expected values of these two statistics for a normal distribution so that you can guide your judgement: skewness about zero and kurtosis about 3. I find the following link very informative:
http://www.itl.nist.gov/div898/handbook/eda/section3/eda35b.htm
You may find this very helpful:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3591587/
For good structural equation modeling analysis, I recommend that the values be between -1 and +1.
Just to clarify: Contrary to what many sources state (incorrectly), kurtosis is most definitely NOT a measure of "peakedness" of a distribution. Rather, it is a measure of the outlier character of data or a distribution, as compared to that of a normal distribution. See here for a clear explanation: https://en.wikipedia.org/wiki/Talk:Kurtosis#Why_kurtosis_should_not_be_interpreted_as_.22peakedness.22
Fit the data using a flexible parametric distribution such as the skewed generalized t (SGT) or the skewed generalized error distribution (SGED) and use a log-likelihood ratio to test the null hypothesis of normality of the data. Simple kurtosis and skewness statistics, including the JB test, may give misleading results because of outliers.
This is the best site, explaining all the statistical analyses in detail:
https://statistics.laerd.com/spss-tutorials/testing-for-normality-using-spss-statistics.php
The standard which is followed is skewness between -1 and +1, and kurtosis between -3 and +3.
A normally distributed data has both skewness and kurtosis equal to zero. It is near-normal if skewness and kurtosis both ranges from -1 to 1. Values outside that range may still be "acceptable".
For skewness, it is preferred not to exceed 1; less than 0.20 is excellent.
Hi,
According to George and Mallery (2016), the arithmetic mean is a good descriptor if the skewness value obtained is within the ±2.0 cut-off point. Byrne (2016) set the cut-off point for kurtosis at less than 7 to be acceptable.
Best Regards
Almaamari
I would approach the problem visually and exploratively, because in my experience every statistical descriptor or test requires mathematical prerequisites or model assumptions.
Use QQ-plot to compare to Gaussian or ABC-plot to measure Skewness.
Michael, J. R. (1983). The stabilized probability plot. Biometrika, 70(1), 11-17.
Ultsch, A., & Lötsch, J. (2015). Computed ABC Analysis for Rational Selection of Most Informative Variables in Multivariate Data. PloS one, 10(6), e0129767. doi:10.1371/journal.pone.0129767
The R language provides the relevant implementations. Write to me, if you require package names.
Most sources cited here are books; I would like to add the article by Ryu (2011). Following this article, the cut-off point is -2 / +2.
Ryu, E. (2011). Effects of skewness and kurtosis on normal-theory based maximum likelihood test statistic in multilevel structural equation modeling. Behavior Research Methods, 43(4), 1066-1074.
Dear Everyone,
I am having an awkward situation with my data. If I look at the histogram and the z-value criteria (only 5% of values should be greater than 1.96, etc.), and skewness is between -1 and 1, all these criteria are fulfilled. But the KS and Shapiro normality tests are significant. I have tried transformations, but it is still not working. Can someone tell me whether I can argue, on the basis of the histograms and these z-value criteria, that my data are normal?
Thanks in advance
Hello Fawad,
In many situations the KS and Shapiro-Wilk tests are too sensitive. In my opinion the best you can do is avoid them and use the histograms and skewness values to underpin normality.
Thanks for your guidance.
Actually, I have to run a multivariate regression, for which the first assumption is normality and the second is homogeneity. Let's say my data are normal on the basis of histograms and skewness values. But I am also getting a significant result on the homogeneity (Levene's) test.
I have three different products and a performance variable that contains the values for all three products. When I choose "Multivariate" and select the homogeneity test, it gives me a significant result, and the Box's test value is also significant. I have tried the transformation options (cube, square, log, etc.), but the problem is that if I square the values, it comes out right for one product but wrong for another. I don't know what to do in this situation.
Thanks
I just came across the following site. Please have a look. May be there is some use.
https://stats.stackexchange.com/questions/245835/range-of-values-of-skewness-and-kurtosis-for-normal-distribution
Normally, the range is -1.96 through +1.96; this is the dispersion distance from 0 in both directions for a normal distribution.
Check this link; it may be helpful:
https://stats.stackexchange.com/questions/245835/range-of-values-of-skewness-and-kurtosis-for-normal-distribution?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa
Best wishes
What I understand about skewness is:
Data massed on the left side (tail to the right) = positive values
Data massed on the right side (tail to the left) = negative values
1. Value < -1 or > +1: highly skewed
2. Value between -1 and -0.5, or between +0.5 and +1: moderately skewed
3. Value between -0.5 and +0.5: the distribution is approximately symmetric
Another one is the w/s test for normality, used to determine the significance of the difference between a frequency distribution based on a given sample and a normal frequency distribution. It can be computed by hand.
Found in 100 Statistical Tests, 3rd Edition, by Gopal K. Kanji, 2006. Sage: London.
There are varied views about this: stringent limits are -1 to +1, whereas liberal authors recommend -3 to +3.
Further, the p-value of the Kolmogorov-Smirnov test should be non-significant.
KURTOSIS: considered not normal if it exceeds 3.
SKEWNESS: to conclude normal or not, calculate D'Agostino's K-squared test statistic, which is compared against a chi-squared distribution with df = 2 (critical value about 5.99). If K² > 5.99, the skew characterizes non-normally distributed data.
See link below:
https://en.wikipedia.org/wiki/D%27Agostino%27s_K-squared_test
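As an illustration, scipy's `normaltest` implements the D'Agostino-Pearson K² statistic (combining the skewness and kurtosis z-scores); this sketch uses an arbitrary seed and a deliberately skewed lognormal sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.lognormal(size=500)  # clearly right-skewed data

# D'Agostino-Pearson K^2: under normality it follows a chi-squared
# distribution with df = 2, so the 5% critical value is about 5.99
k2, p_value = stats.normaltest(x)

print(f"K^2 = {k2:.1f}, p = {p_value:.2g}")
# For skewness alone, scipy also offers stats.skewtest(x)
```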
It's all very well to say use X as a cut off for some number. But, you are taking your data and boiling it down to one number. Just as with the boiling method of cooking, you can lose the flavor of what's really there.
I recommend making a Q-Q plot to see if something isn't lurking in your data that gets lost by making these calculations. It's also possible the QQ plot will suggest that you haven't really collected enough data.
Clem,
Could you describe your methods for the QQ plot? Thank you for sharing. I agree other statistical methods are useful and can reveal underlying characteristics of the data worthy of consideration for drawing valid conclusion.
Janet
Janet Hanson - If the data is perfectly normal, the dots should (in theory) all fall on the QQ Plot line but due to chance the last few bits on the ends probably won't.
On the other hand, if there's a hint of an S or C shape, where the ends gently swaying away from the QQ Plot line, then something else may be going on even though statistically your Skewness and Kurtosis cut off numbers say you probably have a normal distribution. In that case, you may want to get more data (if time and resources allow) to see if anything interesting is happening with the "Outliers".
What I'm seeing in some data I'm analyzing is that there is a gradual deviation from the normal distribution depending on a gradual loss of functionality. It's fairly subtle, and I wouldn't have noticed it if I had just relied on numeric values or a histogram. With a series of QQ plots, it's clear that something is awry.
I hope that helps.
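One way to sketch this QQ-plot check in Python is with scipy's `probplot`, which computes the ordered data against theoretical normal quantiles plus a straight-line fit (pass `plot=ax` to draw it with matplotlib; the seeds and the heavy-tailed t(3) sample here are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
normal_data = rng.normal(size=200)
heavy_tailed = rng.standard_t(df=3, size=200)  # fat tails bend the QQ ends

# Each call returns (theoretical quantiles, ordered data) and a line fit
(osm_n, osr_n), (slope_n, icpt_n, r_normal) = stats.probplot(
    normal_data, dist="norm")
(osm_t, osr_t), (slope_t, icpt_t, r_heavy) = stats.probplot(
    heavy_tailed, dist="norm")

# r is the correlation of the QQ scatter with its fitted line:
# very close to 1 when the dots hug the line, lower when the ends sway off
print(f"r (normal) = {r_normal:.4f}, r (heavy-tailed) = {r_heavy:.4f}")
```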
When the normality of your data is in question, it might be worthwhile to look into robust estimators. This is a very readable introduction:
Field, A. P., & Wilcox, R. R. (2017). Robust statistical methods: A primer for clinical psychology and experimental psychopathology researchers. Behaviour Research and Therapy, 98, 19-38, doi:10.1016/j.brat.2017.05.013.
Thank you Dr Naeem Aslam . I have learned a lot by reading the above answers.
Thank you Dr Janet Hanson for detailed explanation.
A normal distribution is symmetric and mesokurtic.
Hence, for a normal distribution, skewness is 0 and kurtosis is 3.
There is no interval value for the skewness or the kurtosis of a normal distribution.
According to George & Mallery (2016), values of skewness and kurtosis within ±2 are acceptable. For reference:
George, D., & Mallery, P. (2016). IBM SPSS Statistics 23 Step by Step: A Simple Guide and Reference (13th ed.). New York: Routledge.
Although normally distributed data have zero skewness, which means most observations lie in the centre, and a kurtosis of 3, in reality we hardly ever get perfectly normal data, so some deviations are permissible. Hair et al. (2012) have prescribed -1 to +1 for both.
To me it's a question of perspective. For a quantitative finance researcher, K > 3 is welcome, as that indicates a fat tail; he will be more interested in getting such a value. A consumer behaviour researcher, on the other hand, would prefer to observe K lower than or equal to 3, for obvious reasons. Kurtosis values are thus perspective-based, and heuristics cannot be developed easily.
I have been observing the above posts with interest, particularly because I think that determining satisfactory skewness and kurtosis is probably more complex than indicated by the typically used rules of thumb.
The following article by H. Y. Kim (2013) indicates, for example, that sample size can influence how researchers should use and interpret skewness and kurtosis (e.g., with small samples, easily obtained z values should be used) and that different stats packages might provide different information concerning kurtosis. Here is the URL:
Article Statistical notes for clinical researchers: Assessing normal...
The following YouTube presentation is clear and thoughtful:
https://www.youtube.com/watch?v=IiedOyglLn0#t=218.844276
I would be happy for people to react to the information in these sources.
Robt.
Thanks for responding, Aamna. I should have been more explicit. By "rules of thumb" I was referring to the notion that skewness or kurtosis values that lie in the range of -2 to +2 are satisfactory. I think this isn't always the case, and might be so only for samples greater than 300 or so.
I was recently examining some data (N = 200) in which skewness and kurtosis were less than |1|, but the histograms clearly indicated the data were quite skewed and leptokurtic. At first, I couldn't work out why there was a discrepancy between what the histograms and the descriptive stats were telling me.
It was only when doing what Kim (I provided the reference above) recommends that I was able to obtain the statistics for skewness and kurtosis that matched up with what the histograms looked like. In essence, Kim recommends dividing the skewness and kurtosis output from SPSS by the relevant standard errors (also provided by SPSS) to obtain a z value if numbers in the sample are less than 300. It's a little more complex than that, but easy to understand from Kim's article, which is open access.
I hope that's helpful, but please feel free (anyone) to come back. Communication is the beauty of ResearchGate.
Robt.
I believe the reason you are getting two general answers to this question is that different programs produce different values for kurtosis. I believe SPSS subtracts 3 (the kurtosis value for a normal distribution) so that negative values represent platykurtic and positive values reflect leptokurtic. In SPSS if you are unsure you can use the standard error to determine whether your value differs significantly from normal.