This is quite tricky, and there is a lot of discussion of it at https://stats.stackexchange.com/
When it comes to statistical tests that assume a normal distribution, such as the t-test or ANOVA, keep in mind that they are quite robust to minor violations of normality. And when the distribution is normal or nearly so, they are more powerful than non-parametric tests, so in general I think it is better to tolerate a slight deviation from normality than to switch to a non-parametric test. In any case, with n > 50 a parametric test (with its assumptions met) and a non-parametric test will most probably give the same result, as the sketch below illustrates.
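To illustrate that last point, here is a minimal R sketch with simulated (hypothetical) data: at n = 60 per group, the parametric and non-parametric tests typically return very similar p-values.

```r
set.seed(42)
a <- rnorm(60, mean = 0.0, sd = 1)   # two hypothetical samples, n = 60 each
b <- rnorm(60, mean = 0.5, sd = 1)

t.test(a, b)$p.value        # parametric: Welch two-sample t-test
wilcox.test(a, b)$p.value   # non-parametric: Wilcoxon rank-sum test
```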
If you still want to test for normality, you can use the Shapiro-Wilk test; in R it is simply `shapiro.test(x)`. But I always recommend doing three things first:
1) Think about the physical meaning of your data. A normal distribution is a natural assumption in many cases.
2) Plot a histogram to see the distribution of the values.
3) Use a Q-Q plot. See an example here: http://www.sthda.com/english/wiki/normality-test-in-r, and the short R sketch after this list.
Some texts (Quinn and Keough 2002), publications (Tacha et al. 1982), and statistical packages treat the evaluation of the assumptions of homogeneity and normality as preliminary to the statistical analysis of the data. That amounts to evaluating the response variable. While this can be a useful guide to the initial choice of error distribution (normal or something else), it is not sufficient. The assumptions of the general linear model (including regression, ANOVA, and ANCOVA) are that the errors (residuals) are normal and homogeneous (Eisenhart 1947, Seber 1966, Neter et al. 1983 pp 31 & 49, Quinn and Keough 2002 pp 110 & 280).
In your case, the assumption is that the residual deviations from the group means are normal *within* each group.
Where advanced texts do evaluate assumptions, they typically use a residual-versus-fit plot (for homogeneity) and a normal-score or quantile-quantile (Q-Q) plot (for normality of the residuals), as in the sketch below.
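In R those diagnostics come from the fitted model, not from the raw response. A minimal sketch, assuming a hypothetical one-way design with a response `y` and a factor `group`:

```r
set.seed(2)
d <- data.frame(group = gl(3, 20, labels = c("a", "b", "c")))
d$y <- rnorm(60, mean = c(5, 6, 7)[d$group])   # hypothetical response

fit <- aov(y ~ group, data = d)

plot(fitted(fit), resid(fit),            # residual vs fit: homogeneity
     xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 2)
qqnorm(resid(fit)); qqline(resid(fit))   # Q-Q plot: normality of residuals
# plot(fit) produces the same diagnostics, and more, in one call
```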
The statistical literature warns against using statistical tests to evaluate assumptions and advocates graphical tools instead (Montgomery & Peck 1992; Draper & Smith 1998; Quinn & Keough 2002). Läärä (2009) gives several reasons for not applying preliminary tests of normality, including: most statistical techniques based on normal errors are robust to violations; for larger data sets the central limit theorem implies approximate normality; for small samples the power of the tests is low; and for larger data sets the tests are sensitive to deviations too small to matter (the very deviations the central limit theorem renders harmless).
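Läärä's last two points are easy to check by simulation. A sketch with hypothetical data: Shapiro-Wilk rejection rates at alpha = 0.05 for a genuinely heavy-tailed small sample versus a practically-normal large one.

```r
set.seed(3)
# Rejection rate over 1000 simulated data sets, alpha = 0.05
reject_rate <- function(n, rdist)
  mean(replicate(1000, shapiro.test(rdist(n))$p.value < 0.05))

reject_rate(15, function(n) rt(n, df = 3))     # small n, heavy tails: power tends to be low
reject_rate(4000, function(n) rt(n, df = 30))  # huge n, near-normal: rejections are common
```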
That preliminary evaluation is insufficient may come as a surprise, as may the advice against using statistical tests to evaluate assumptions. But I think best practice, as set out in the literature, needs to be stated.
Good luck with your research,
~David S
Draper, N.R. and H. Smith. 1998. Applied Regression Analysis, 3rd ed. Wiley, New York.
Eisenhart, C. 1947. The assumptions underlying the analysis of variance. Biometrics 3: 1–21.
Läärä, E. 2009. Statistics: reasoning on uncertainty, and the insignificance of testing null. Annales Zoologici Fennici 46: 138–157.
Montgomery, D.C. and E.A. Peck. 1992. Introduction to Linear Regression Analysis, 2nd ed. Wiley, New York.
Neter, J., W. Wasserman, and M.H. Kutner. 1983. Applied Linear Regression Models. Richard D. Irwin, Homewood, Illinois.
Quinn, G.P. and M.J. Keough. 2002. Experimental Design and Data Analysis for Biologists. Cambridge University Press, Cambridge.
Seber, G.A.F. 1966. The Linear Hypothesis: A General Theory. Griffin, London.
Tacha, T.C., W.D. Warde, and K.P. Burnham. 1982. Use and interpretation of statistics in wildlife journals. Wildlife Society Bulletin 10: 355–362.