For a few years now, I have dedicated part of my research and publications to the problem of hate speech on social networks, and I am currently outlining a new article on the subject. Something worries me. It seems obvious that some hate speech is easily labeled and traced (especially in cyberspace, with the help of automatic text-processing software). Many published works today provide relevant data for detecting such speech and identifying potentially criminal users hiding behind pseudo-anonymity.
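To make concrete what I mean by "easily labelable" hate speech: the simplest automatic approaches flag a post when it contains a term from a curated lexicon. This is only a minimal sketch of that principle; the lexicon terms and posts below are invented placeholders, not real data, and production systems use statistical classifiers rather than bare word lists.

```python
import re

# Placeholder lexicon for illustration only; real systems draw on
# curated, much larger resources.
LEXICON = {"vermin", "subhuman"}

def flag(post: str) -> bool:
    """Return True if the post contains a lexicon term as a whole word."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return any(tok in LEXICON for tok in tokens)

posts = [
    "Those people are vermin and should leave.",  # flagged
    "I disagree with the new policy.",            # not flagged
]
flags = [flag(p) for p in posts]
```

The limitation I am pointing at is visible even in this toy: such matching captures only hatred that surfaces in traceable wording, while irony, implication, and coded language pass through untouched.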
My concern is whether there is a better way to approach hate speech without departing from purely scientific objectives, and without contributing to the emergence of a kind of Linguistic Court willing to rule on the acceptability of expressions: one that would clean, fix, and lend "splendour" to digital language, incidentally improving the public image of social platforms and regulating coexistence in cyberspace so that only "good people" could participate.
The truth is that, in addition to speech that explicitly expresses hatred, there is an untraceable, ungrammaticalizable hatred that resists both Logic and Empiricism. My question is this: does anyone else fear, as I do, that such practices of identifying "bad speech" could be used to purify the language and so limit freedom of expression? (Apologies for my English, which is a language I use very rarely.)