No data is normally distributed. The normal distribution is an ideal, a model. That model can be a better or worse approximation of a given data distribution. The problem is to judge when the approximation is so bad that conclusions based on mathematical procedures assuming a particular distribution model would be relevantly astray. Additionally, the sample size plays a role, as Ayman stated (the larger the sample, the smaller the impact of a bad approximation).
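To make the sample-size point concrete, here is a minimal simulation sketch (Python with NumPy/SciPy, purely illustrative, with made-up parameters): even when the population is clearly non-normal, the distribution of the sample mean becomes increasingly symmetric as the sample grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# The population is exponential, i.e. clearly not normal (skewness ~ 2).
# Watch the skewness of the sample means shrink (roughly as 2/sqrt(n))
# as the sample size n increases.
for n in (5, 50, 500):
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    print(f"n = {n:3d}: skewness of the sample means = {stats.skew(means):.3f}")
```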
But before moaning that the distribution is not normal, the first step should be to think about which distribution model is adequate for the kind of data at hand. For instance, for count data, the Poisson or the negative binomial model are typically adequate choices; for life-times, exponential or Weibull models are usually reasonable, etc. A statistical analysis based on these distribution models should then be used. Only if one can't find or doesn't know an appropriate model is the normal model the "fall-back" solution - and if this does not fit the data, one may consider transformations, as Violeta mentioned.
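As a sketch of this model-selection step, the following compares exponential, Weibull, and normal models fitted to hypothetical life-time data by maximum likelihood, using AIC as a rough yardstick (all data and numbers here are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lifetimes = rng.weibull(1.5, size=200) * 100.0  # hypothetical life-time data

candidates = [
    # (name, distribution family, fitted params, number of free params)
    # For life-time models the location is fixed at 0 (floc=0).
    ("exponential", stats.expon,       stats.expon.fit(lifetimes, floc=0), 1),
    ("Weibull",     stats.weibull_min, stats.weibull_min.fit(lifetimes, floc=0), 2),
    ("normal",      stats.norm,        stats.norm.fit(lifetimes), 2),
]
for name, dist, params, k in candidates:
    loglik = dist.logpdf(lifetimes, *params).sum()
    # Lower AIC = better fit/complexity trade-off.
    print(f"{name:12s} AIC = {2 * k - 2 * loglik:8.1f}")
```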
There are non-parametric alternatives for certain tests, like the Mann-Whitney test for t-tests and Kruskal-Wallis for ANOVAs. You can also transform your data, which may make it possible to use parametric tests. So, of course, skewed data is accepted in the behavioral and social sciences!
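With SciPy, these alternatives look roughly like this (hypothetical samples; just a sketch):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Two hypothetical skewed samples (e.g. reaction times)
a = rng.lognormal(mean=0.0, sigma=0.6, size=40)
b = rng.lognormal(mean=0.3, sigma=0.6, size=40)

# Non-parametric alternative to the independent-samples t-test
u, p_u = stats.mannwhitneyu(a, b, alternative="two-sided")
print(f"Mann-Whitney U: p = {p_u:.4f}")

# Kruskal-Wallis, the non-parametric alternative to one-way ANOVA
h, p_h = stats.kruskal(a, b)
print(f"Kruskal-Wallis: p = {p_h:.4f}")

# Or transform toward symmetry and use the parametric test
t, p_t = stats.ttest_ind(np.log(a), np.log(b))
print(f"t-test on log-transformed data: p = {p_t:.4f}")
```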
" The problem is to judge when the approximation would be ... relevantly astray. "
This may happen when the model cannot predict the extreme values of a univariate dataset. It is better to have a linear - and probably wrong - cumulative distribution model that fits the two extreme points than a normal model that fits nothing, based on so many premises, complex mathematical expressions, and terms (about mean, median, mode, asymptotic behavior at infinity, symmetry by decree, constant standard deviation, and central limit theorems about samples of infinite size). I agree that the model must describe the available data as a basis for interpretation. Thanks, Emilio Chaves
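To illustrate Emilio's point about the extremes, a minimal sketch (hypothetical data): fit a normal model to skewed data and compare the minimum and maximum it predicts with the observed ones.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.sort(rng.lognormal(sigma=1.0, size=100))  # hypothetical skewed data

# Fit a normal model and ask what it predicts for the two extreme
# observations, using simple plotting positions (i - 0.5) / n.
mu, sd = stats.norm.fit(x)
n = len(x)
pred_min = stats.norm.ppf(0.5 / n, mu, sd)
pred_max = stats.norm.ppf((n - 0.5) / n, mu, sd)
print(f"observed  min/max: {x[0]:7.2f} {x[-1]:7.2f}")
print(f"predicted min/max: {pred_min:7.2f} {pred_max:7.2f}")
# For skewed data the normal fit typically predicts a negative minimum
# and underestimates the maximum, i.e. it fails exactly at the extremes.
```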