Hi, I have data on the 8 health domains of the SF-12. All of them have skewness and kurtosis indices between -1 and 1, but the Kolmogorov-Smirnov and Shapiro-Wilk tests are significant. So is my data normally distributed?
Normal distributions range from -infinity to +infinity. Furthermore, they do not exist in the real world, if you believe George Box. So no, your (presumably finite) sample is not normally distributed.
I suppose you asked that question because you want to carry out some kind of analysis and have been told that the data must be normally distributed in order to do so. It will be easier for readers to help you if you provide some more information. For starters:
- What are your research questions?
- What type of analysis would you like to use to address those questions?
- What is the sample size?
- Is it sensible and defensible to use means and SDs to describe your data?
Another way is to generate a histogram, a graphical representation of the distribution of a dataset.
You can also conduct a chi-square goodness-of-fit test. A graphical method for comparing the distribution of a dataset with a theoretical normal distribution is the Q-Q plot, and a statistical test for checking the normality of data is the Shapiro-Wilk test.
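As a sketch of how these checks look in practice (assuming NumPy and SciPy are available, and using simulated scores as a stand-in for SF-12 domain data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=50, scale=10, size=200)  # simulated, hypothetical domain scores

# 1. Histogram: bin counts give a rough picture of the empirical distribution.
counts, edges = np.histogram(x, bins=10)

# 2. Shapiro-Wilk test: a small p-value flags incompatibility with normality.
sw_stat, sw_p = stats.shapiro(x)

# 3. Kolmogorov-Smirnov test against a fitted normal. Note: estimating the
#    mean and SD from the same data makes the standard KS p-value only
#    approximate (Lilliefors' correction would be more appropriate).
ks_stat, ks_p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))

print(f"Shapiro-Wilk p = {sw_p:.3f}, KS p = {ks_p:.3f}")
```

A Q-Q plot can be drawn with `scipy.stats.probplot(x, dist="norm", plot=plt)` if Matplotlib is available.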
You ask: "So is my data normally distributed?" - The answer is: no. Always. Your data has some empirical (frequency) distribution, but the normal distribution is a theoretical (probability) distribution model. These are very different things. Of course, you may ask whether each of your sample values may be assumed to be a realization of a random variable with a normal distribution, possibly sharing the same expectation and variance (i.e., your sample is assumed to be a realization of a normal random vector).
Answering this question using hypothesis tests like Kolmogorov-Smirnov or Shapiro-Wilk is pointless. These tests compare the data to the model and give you a p-value that measures the statistical incompatibility of the data with the model. This p-value is a function of (among other things) the sample size, and like any other hypothesis test it does not(!) answer whether the tested hypothesis is true, but rather whether the information provided by the data (which increases with the sample size) is sufficient to reject the tested hypothesis. So there are two possible results from such a test, and both are useless:

(i) the test is not significant: the test was not sufficiently powered to reject the hypothesis, and you still don't know whether the distribution assumption is reasonable, because the test might not have been able to detect relevant kinds of incompatibilities;

(ii) the test is significant: the test was sufficiently powered to reject the hypothesis, and you still don't know whether the distribution assumption is reasonable, because the detected kinds or severity of incompatibilities may not be relevant.

In either case the conclusion is: you don't know. So doing such a test is pointless.
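The sample-size dependence is easy to demonstrate: the same mild non-normality (here a heavy-tailed t distribution, chosen as an illustrative assumption) goes undetected by Shapiro-Wilk at small n but is strongly rejected at large n.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
population = rng.standard_t(df=3, size=5000)  # heavy-tailed, so NOT normal

# Same departure from normality, two sample sizes:
p_small = stats.shapiro(population[:20]).pvalue  # n = 20
p_large = stats.shapiro(population).pvalue       # n = 5000

print(f"n=20:   p = {p_small:.3f}")   # typically not significant
print(f"n=5000: p = {p_large:.2e}")   # emphatically significant
```

The significance tells you about the amount of information in the sample, not about whether the deviation from normality matters for your analysis.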
Whether the distributional assumption makes sense follows solely from your understanding of the data-generating process (what is measured, and how it is measured). If this understanding leads you to believe that the normal distribution would be a reasonably good model, you may still check this by looking at some diagnostic plots (e.g. a normal Q-Q plot). If you see a striking pattern there, you should rethink your assumptions. This is of course also a kind of "test of normal distribution", but one based on common sense and understanding, not on trying to reject a statistical hypothesis.
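A minimal sketch of the normal-Q-Q diagnostic, assuming SciPy is available. `probplot` pairs the ordered sample values with theoretical normal quantiles; points hugging a straight line (correlation r near 1) suggest the normal model is not strikingly contradicted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=300)  # simulated data for illustration

# Theoretical quantiles vs. ordered sample values, plus the fitted line.
(osm, osr), (slope, intercept, r) = stats.probplot(x, dist="norm")
print(f"Q-Q correlation: r = {r:.3f}")

# To actually draw the plot:
# import matplotlib.pyplot as plt
# stats.probplot(x, dist="norm", plot=plt); plt.show()
```

The point of the plot is to judge the *pattern and size* of deviations yourself, rather than delegating that judgment to a p-value.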