Although the left side is a bit off, how close do you need it to be? If the left tail is important to your application, that will be a problem. That said, you will not often see something much more "normal" than this in practice. The main application for normality is likely the Central Limit Theorem, where the distribution of sample means approaches normality as sample size grows; you can use a t-value there. The asymmetry here, however, could be concerning given the apparently substantial sample size, whether this is a sample from a population distribution or a distribution of means. There seems to be enough data to be fairly certain the asymmetry is real. But is it enough to cause a problem in your application? You might try using these data and compare the results to what you would have gotten with a sample drawn from an artificially generated, exactly normal distribution. A simulation could help, since even a sample this size from a normal population will still look a little "off." Best wishes.
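To make that simulation idea concrete, here is a minimal sketch (assuming NumPy and SciPy are available; the sample size n = 500 and the choice of skewness as the asymmetry measure are illustrative stand-ins for your own data):

```python
# Sketch: how "off" does a finite sample from an exactly normal
# population typically look? Build a reference range for skewness.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 500  # hypothetical sample size; substitute your own

# Draw many samples from an exactly normal population, record skewness.
skews = [stats.skew(rng.normal(loc=0.0, scale=1.0, size=n))
         for _ in range(1000)]

lo, hi = np.percentile(skews, [2.5, 97.5])
print(f"95% of normal samples of size {n} have skewness "
      f"in [{lo:.3f}, {hi:.3f}]")
```

If your observed skewness falls well outside that range, the asymmetry is unlikely to be sampling noise alone; if it falls inside, a normal sample of that size could easily have produced it.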
By the way, if your sample size is large enough, a normality test's p-value will almost always tell you your null hypothesis is off; the question is, how far off? Effect sizes matter. That is why a lone p-value is not helpful.
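This is easy to demonstrate: hold a small, fixed departure from normality constant and watch the test's p-value collapse as n grows. A minimal sketch, assuming SciPy; the skew-normal shape parameter a = 2 is an arbitrary illustrative choice, not anything estimated from your data:

```python
# Sketch: a fixed, modest departure from normality gets an ever-smaller
# p-value from the Shapiro-Wilk test as the sample size increases.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pvals = {}
for n in (50, 500, 5000):
    # Same mildly skewed population every time; only n changes.
    x = stats.skewnorm.rvs(a=2, size=n, random_state=rng)
    pvals[n] = stats.shapiro(x).pvalue
print(pvals)
```

The population never changes, yet the verdict does: at small n the test may "pass" the data as normal, and at large n it emphatically rejects, which is exactly why the p-value alone cannot answer "how non-normal is this?"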
I don't think your skewness or kurtosis statistics tell the real story here; it's the asymmetry. This is a good example of why graphics can be so helpful: just looking at those summary statistics does not tell you much.
And remember, sample size matters, but what is big or small depends on the situation/application.
By the way, Said Ait Messaad, I think you have the p-value idea turned around. A low p-value means rejecting the null hypothesis of a normal distribution, so that would be a "no" to normality. But that is the problem with p-values: they depend on both effect size and sample size. The same is true of an estimated standard error, which also changes with sample size, but a standard error is more easily interpretable. People tend to use a p-value inappropriately to say "yes" or "no" to something that is really a matter of degree. Here, as I explained, it depends on whether this case will serve your purposes. You should never just use a p-value to say yes or no; you could change your sample size to change your answer.
Here is a good example of why you want to know "How much is likely," not "Yes or no" as your goal:
There are hypothesis tests for heteroscedasticity in regression which people use to answer "yes or no" to the question "Is there heteroscedasticity in these residuals?" Two problems with that: (1) This is not a matter of yes or no, but of degree. Because predicted-y varies in size, and is a size measure, the sigma for the residuals should vary as the corresponding predicted-y varies. (2) What if you decide you need to account for heteroscedasticity? The test does not tell you how. It is better to estimate the coefficient of heteroscedasticity and use it in the regression weights, which enter the estimates of the regression coefficients and affect their variances. Estimation is therefore more helpful than testing.
Even though it may seem best for a study to yield such yes/no decisions, you can't let a hypothesis test make them for you, at least not without setting a threshold p-value informed by power and sample size. That is, the effect size that matters in practice for your application matters. You have to make informed decisions. It is generally better to estimate and then decide on the basis of your situation; graphics often help. It may seem less arbitrary to just look at a p-value, but that is actually more arbitrary if you do not know the impact of sample size. Also, ask what you are measuring: in the example for this question, the graph suggests asymmetry is the most important feature to study. You should make your decisions accordingly.