It depends on the type of distribution you have. You can try transformation techniques (square root, logarithm, etc.): if the distribution is close to uniform, try a square root; if it is skewed, try a logarithm.
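For instance (a minimal sketch in Python; the simulated right-skewed sample and the requirement of strictly positive values for the logarithm are my assumptions), you can compare the skewness of the raw and transformed values and keep whichever transform brings it closest to zero:

```python
# Compare skewness of raw vs. transformed data (illustrative sample only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.8, size=500)  # right-skewed example data

print("skewness, raw :", stats.skew(x))
print("skewness, sqrt:", stats.skew(np.sqrt(x)))
print("skewness, log :", stats.skew(np.log(x)))  # log requires strictly positive data
```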
If one test indicates the data are not normal, some researchers re-assess normality with several other methods before deciding whether the data can still be treated as normally distributed, e.g. a histogram, stem-and-leaf plot, boxplot, normal probability plot, detrended normal plot, and skewness and kurtosis statistics, in addition to the Kolmogorov-Smirnov and Shapiro-Wilk tests.
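A rough sketch of running several of these checks side by side (Python with SciPy and Matplotlib assumed; the simulated sample is only a placeholder for your own data):

```python
# Re-assess normality with multiple methods on the same sample.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=200)  # replace with your own sample

print("Shapiro-Wilk:", stats.shapiro(x))
# Note: using parameters estimated from the same sample biases the K-S p-value.
print("Kolmogorov-Smirnov:",
      stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))))
print("skewness:", stats.skew(x), " kurtosis:", stats.kurtosis(x))

stats.probplot(x, dist="norm", plot=plt)  # normal probability (Q-Q) plot
plt.show()
```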
If all of the above methods indicate that the data are still not normal, you can use non-parametric statistics/tests instead.
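As one illustration (a minimal sketch; the two exponential samples are hypothetical placeholders for a two-group comparison), the Mann-Whitney U test can stand in for the two-sample t-test without a normality assumption:

```python
# Non-parametric alternative to the two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.exponential(scale=1.0, size=60)
group_b = rng.exponential(scale=1.5, size=60)

# Mann-Whitney U compares the two groups without assuming normality.
print(stats.mannwhitneyu(group_a, group_b, alternative="two-sided"))
```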
Normally distributed data is a commonly misunderstood concept in Six Sigma. Some people believe that all data collected and used for analysis must be distributed normally. But normal distribution does not happen as often as people think, and it is not a main objective. Normal distribution is a means to an end, not the end itself.
Normally distributed data is needed to use a number of statistical tools, such as individuals control charts, Cp/Cpk analysis, t-tests and the analysis of variance (ANOVA). If a practitioner is not using such a specific tool, however, it is not important whether data is distributed normally. The distribution becomes an issue only when practitioners reach a point in a project where they want to use a statistical tool that requires normally distributed data and they do not have it.
You should also understand the reason behind the non-normality of the data:
When data is not normally distributed, the cause for non-normality should be determined and appropriate remedial actions should be taken. There are six reasons that are frequently to blame for non-normality.
Reason 1: Extreme Values
Too many extreme values in a data set will result in a skewed distribution. Normality can often be restored by cleaning the data: identifying measurement errors, data-entry errors and outliers, and removing them from the data for valid reasons.
It is important to confirm that outliers are truly special causes before they are eliminated. Never forget: a small percentage of extreme values is expected in normally distributed data; not every outlier is caused by a special reason. Extreme values should only be explained and removed from the data if there are more of them than expected under normal conditions.
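One possible way to flag candidates for that investigation (a minimal sketch using the common 1.5 × IQR rule; the simulated data and the threshold are illustrative assumptions, not a prescription):

```python
# Flag candidate outliers before deciding whether they are genuine special causes.
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(50, 5, 200), [95.0, 4.0]])  # two injected extremes

q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
candidates = x[(x < lower) | (x > upper)]

# Investigate each flagged value (measurement error? data-entry error? special
# cause?) before removing anything.
print("candidate outliers:", candidates)
```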
Reason 2: Overlap of Two or More Processes
Data may not be normally distributed because it actually comes from more than one process, operator or shift, or from a process that frequently shifts. If two or more data sets that would be normally distributed on their own are overlapped, data may look bimodal or multimodal – it will have two or more most-frequent values.
The remedial action for these situations is to determine which X's cause the bimodal or multimodal distribution and then stratify the data. Check each stratum for normality again, and afterward work with the stratified processes separately.
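A minimal sketch of this stratification step (Python with pandas assumed; the column names "shift" and "value" and the simulated two-shift data are hypothetical):

```python
# Stratify by a suspected X (e.g. shift) and re-check normality per stratum.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "shift": ["A"] * 150 + ["B"] * 150,
    "value": np.concatenate([rng.normal(10, 1, 150), rng.normal(14, 1, 150)]),
})

for shift, grp in df.groupby("shift"):
    stat, p = stats.shapiro(grp["value"])
    print(f"shift {shift}: Shapiro-Wilk p = {p:.3f}")
```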
My answer is: try to determine what the distribution actually is, and then you can begin to analyze the data.
As many esteemed colleagues have indicated, a normal distribution should not be assumed to apply to everything! Even if the distribution does not appear normal, some other standard distribution, such as the Poisson, may fit more closely.
You might try fitting several different types of distributions to your raw data and then calculating how far off each one is.
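A minimal sketch of that comparison (the candidate list and the use of the K-S statistic as a rough distance measure are my assumptions; discrete candidates such as the Poisson would need a different check, e.g. a chi-square test):

```python
# Fit a few candidate continuous distributions and compare goodness of fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.gamma(shape=2.0, scale=3.0, size=300)  # placeholder for your raw data

for name in ["norm", "lognorm", "gamma", "expon"]:
    dist = getattr(stats, name)
    params = dist.fit(data)                        # maximum-likelihood fit
    ks_stat, _ = stats.kstest(data, name, args=params)
    print(f"{name:8s} K-S statistic = {ks_stat:.3f}")  # smaller means closer fit
```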
Take a look at this article. It addresses exactly your question.
I endorse Waldemar Koczkodaj's argument. If, after the tests mentioned by many scholars, your data are still not normally distributed, that does not mean we should discard the data or transform it with logarithms, square roots or the like; rather, use a non-parametric test. For all we know, such data may actually be predicting more comprehensively as it is.