Why do we need tools that automatically recognize the language of a text document, especially if we are humanists or linguists doing work or research? I mean tools like https://pypi.python.org/pypi/langdetect or https://detectlanguage.com/
I am a phonetician, and I would use them to perform automatic analysis or annotation. Let's say I want to phonetically transcribe all the messages that arrive at my website, and they can arrive in either Spanish or Catalan. The transcriber I use needs to know the language before beginning the transcription, because "vowel + s + vowel" should be transcribed with [s] in Spanish but [z] in Catalan, and I don't want to check the language of every incoming message by hand, since I want the transcription to be performed in real time. Another use would be giving a personalized automatic answer in whatever language people address you in.
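The routing described above can be sketched in a few lines of Python. This is only an illustration, not a full transcriber: it assumes the `langdetect` package mentioned in the question (`pip install langdetect`), falls back to a deliberately crude stop-word heuristic if that package is not installed, and reduces the "transcription" to the single intervocalic-s rule from the example. The function names are my own placeholders.

```python
try:
    # Real detector, if available: https://pypi.python.org/pypi/langdetect
    from langdetect import detect, DetectorFactory
    DetectorFactory.seed = 0  # make the probabilistic detector deterministic
except ImportError:
    # Toy fallback for the sketch: distinguish Spanish from Catalan
    # by counting a few frequent function words. Not robust!
    def detect(text: str) -> str:
        ca_hints = {"amb", "és", "això", "què", "els"}
        es_hints = {"con", "es", "esto", "qué", "los"}
        words = set(text.lower().split())
        return "ca" if len(words & ca_hints) > len(words & es_hints) else "es"

def intervocalic_s(lang: str) -> str:
    """Phone for orthographic <s> between vowels: [s] in Spanish, [z] in Catalan."""
    return {"es": "[s]", "ca": "[z]"}[lang]

def route_message(message: str) -> str:
    """Detect the language of an incoming message and pick the matching rule set."""
    lang = detect(message)  # e.g. 'es' or 'ca'
    if lang not in ("es", "ca"):
        raise ValueError(f"unsupported language: {lang}")
    return intervocalic_s(lang)
```

In a real pipeline, `route_message` would hand the text to a full Spanish or Catalan grapheme-to-phoneme module instead of returning a single phone; the point is only that detection happens once, up front, so transcription can run unattended.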
Therefore, they are necessary for every language-specific analysis or transcription tool, and for automatic translation and dialogue systems.
For historical texts, I fancy the idea of a "language detector" where you put in a passage from a 12th-century book and the system tells you: this was written in 12th-century Old Spanish :D
You need them to be able to objectively analyse the tonnes of texts in your corpus linguistics studies and discourse analysis research. You need them for text mining on large parallel and comparable corpora in terminology research. There are even tools to help you visualize texts. I learned much more about these during a recent Digital Humanities (DH) workshop at the University of Lagos in Nigeria. Fortunately for you, DH is very well developed in Germany, quite close to you over there in Poland.