I would like to know which is the best natural language software for recognizing parts of speech with a small percentage of errors. I have used Stanford CoreNLP, but it sometimes produces errors.
There is no perfect tool. You don't even get perfect results if you do it by hand ;-) (see e.g., https://en.wikipedia.org/wiki/Part-of-speech_tagging#Issues). The error rate depends on the domain, the genre, and maybe even orthographical errors in the texts you are tagging. And you should ask yourself whether you really need a tagger with a lower error rate than the Stanford tool you are using. Some problems are structural (see the link above) and occur on a regular basis; others occur only in single cases. If you are using a large enough corpus, these errors should not be a big problem.
You should just try out different tools (what language are you tagging? English?). There are many out there, e.g., the TreeTagger, the aforementioned Stanford NLP toolset, and at the end of the Wikipedia page linked above you will find quite a large number of tools. Then you can decide which tool comes closest to what you consider the correct POS tagging.
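One practical way to make that decision: hand-tag a small gold-standard sample from your own texts and measure each tool's per-token accuracy against it. A minimal sketch in Python (the tag sequences and tagger names below are invented for illustration, not real tool output):

```python
def tagging_accuracy(gold, predicted):
    """Fraction of tokens whose predicted POS tag matches the gold standard."""
    if len(gold) != len(predicted):
        raise ValueError("token sequences must be aligned")
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

# Hypothetical tags for the same four tokens from two different taggers
gold     = ["DT", "NN", "VBZ", "JJ"]
tagger_a = ["DT", "NN", "VBZ", "JJ"]   # hypothetical tool A
tagger_b = ["DT", "NN", "NNS", "JJ"]   # hypothetical tool B

print(tagging_accuracy(gold, tagger_a))  # 1.0
print(tagging_accuracy(gold, tagger_b))  # 0.75
```

Run each candidate tagger over the same sample, align the output tokens, and pick the tool with the highest accuracy on *your* domain rather than relying on published benchmark figures.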
MElt is not bad (models for French, English, Spanish, and Italian), and a little better than the TreeTagger for French: https://www.rocq.inria.fr/alpage-wiki/tiki-index.php?page=MElt
It is written in Python (+numpy), with Perl for the embedded tokenizer.