Neural networks and deep learning have made remarkable advancements in various domains, but creating an entirely new language for communication poses intricate challenges. Several factors contribute to the absence of neural networks creating their own language:
Training Data Limitations: Neural networks rely on vast and diverse training data to discern patterns and generate meaningful output. Developing a new language necessitates a substantial corpus of well-structured data, which is currently unavailable. Amassing an extensive dataset for a new language is an immensely daunting task, limiting progress in this area.
Lack of Language Structure Understanding: Neural networks excel at pattern recognition and generating outputs based on existing data patterns. However, comprehending the intricate structure, syntax, and semantics of language presents a formidable obstacle. Teaching neural networks to grasp the complex rules and subtleties of language structure remains a considerable challenge.
Contextual Understanding: Language transcends mere word combinations and is profoundly influenced by contextual cues and cultural nuances. Neural networks encounter difficulties in comprehending and generating contextually appropriate responses. The generation of coherent and contextually accurate language necessitates a deeper understanding of human experiences and cultural references, which lies beyond the current capabilities of neural networks.
Creativity and Abstract Reasoning: Language creation demands creativity, abstract reasoning, and the capacity to generate novel and meaningful expressions. While neural networks can generate text based on existing patterns, they lack the inherent creativity and abstract reasoning abilities of human language creators.
Ethical Considerations: The creation of language entails ethical considerations concerning the purpose and potential consequences of the language itself. Allowing neural networks to autonomously create their own language without human oversight could lead to unintended outcomes or ethical dilemmas.
It is crucial to note that deep learning and neural networks primarily focus on specific tasks, such as image recognition, natural language processing, or speech synthesis. While advancements are being made in these areas, the development of a fully autonomous and creative language by neural networks continues to be an ongoing field of research, characterized by significant technological challenges.
If you look closely at what serious AI/AGI people say, and even what serious cognitive psychology people say, they say they "need structure" for the understanding of what they believe they have already gotten. The only good answer to that is a TRUE hierarchical structure: a model built from direct observation OF PATTERNS (at first, at their inception, overt patterns) and inductive work, seeing the bases for constructivism. And, GUESS WHAT: we cannot think this up with our minds, OR with anything that resembles trial-and-error. You need empiricism and research where the Subject DEFINES ALL (such is real science). And where does that come from? It comes from using an approach that will yield true understanding of human ontogeny (child development with stages -- THAT will be A TRUE HIERARCHY, THE TRUE hierarchy!). And how do you get a perspective and approach that provides a foundation (and direction) for SUCH, and a MUCH BETTER foundation for psychology than psychology has today? And who has an APPROACH TO a new, paradigm-shifting foundation? I provide those answers: see my papers and the subsequent 800 pages of essays I have written (all on RG). Is that "too long"? Well, take it or leave it; and so far it has been left, because, in effect, both psychology people and analytic philosophers are too lazy: they fail to establish understandings for themselves (that is, by sufficiently verifying things for themselves); otherwise they would see the "something missing" and go find, via the then obviously needed direct observation, what they need. Using "thinking" with ad hoc concepts, and/or ANY jumping to a "model" of a hypothetico-deductive nature (or resulting from one), does NOT WORK. Those are more answers.
This gives you a true constructivist "structure" for understanding AND gives you all the contexts you need* (given, also, that you have a good understanding of the good research on The Memories -- note the plural; when it is not seen as plural, it IS the same ad hoc, wrongful, premature, preemptive hypothetico-deductive "understanding" of the sort indicated above, and the PERVASIVE problem in the behavioral "sciences" today).
A good understanding of the development and nature of abstraction (and abstract thinking) comes naturally from the same perspective and approach I have proposed (and indicated above). As for the claim that "neural networks primarily focus on specific tasks": which 'tasks', and whose 'tasks'?
* FOOTNOTE: Patterns are within other existing patterns, or emerge from them with their "help" (it's ontogeny, after all). Behavior PATTERNS per se are also of a clearly and truly BIOLOGICAL NATURE and will follow applicable biological principles.
IN RECENT YEARS machines have learned to generate passable snippets of English, thanks to advances in artificial intelligence. Now they are moving on to other languages.
Aleph Alpha, a startup in Heidelberg, Germany, has built one of the world’s most powerful AI language models. Befitting the algorithm’s European origins, it is fluent not just in English but also in German, French, Spanish, and Italian.
That was two years ago. Learning existing languages has been the focus so far, much as it is with your children: you make them learn existing languages, as opposed to making them invent their own.
So now, this might be steeped in philosophy, but my first take is to look at how humans do sometimes end up inventing their own language. Start with that concept, then program a neural net to mimic that behavior.
The idea of a "twin language" (or "cryptophasia" if you want to get really fancy) has been around for some time now. It's been reported that up to 50% of young twins will have their own twin language--one which they use to communicate only with each other and one that cannot be understood by others outside their little duo. The theory behind this "twin language" goes a little something like this: twins are so close to each other and rely on each other so much that they don't have as much of a need to communicate with the outside world, so they make up their own idiosyncratic language that develops only between the two of them. It's a fun and almost magical idea, for sure. But does it stand up to reality?
And it goes on to say: maybe not, in most cases. Maybe it's just that twins don't learn phonetics properly, since they can communicate between themselves just fine anyway. But this is philosophy, so any claims made are always suspect. And in fact,
This is not to say that parents of twins who have their own language should panic. There does seem to be a small percentage of twins who have both their own language and are able to communicate effectively with their parents in the "real" English language. These twins will switch back and forth between their own language and the English language, depending on who they are talking to. This type of "twin language" is not linked to later language delays. It's also, however, less likely to occur.
Being a twin myself: in our early years, yes indeed, we had our own language, which only our slightly older brother could decipher. Investigate that phenomenon, see if neural nets can be made to use that same trial-and-error method, and I would not assume that neural nets cannot create a language.
Here's an older article, which shows AI creating an "interlingua," among existing languages, as a shortcut for translations. It accomplishes this by itself. Sort of a first step to what you're asking for, perhaps.
Google has previously taught its artificial intelligence to play games, and it's even capable of creating its own encryption. Now, its language translation tool has used machine learning to create a 'language' all of its own. ....
However, the most remarkable feat of the research paper isn't that an AI can learn to translate languages without being shown examples of them first; it's the fact that it used this skill to create its own 'language'. "Visual interpretation of the results shows that these models learn a form of interlingua representation for the multilingual model between all involved language pairs," the researchers wrote in the paper.
An interlingua is a type of artificial language, which is used to fulfil a purpose. In this case, the interlingua was used within the AI to explain how unseen material could be translated.
"Using a 3-dimensional representation of internal network data, we were able to take a peek into the system as it translated a set of sentences between all possible pairs of the Japanese, Korean, and English languages," the team's blogpost continued. The data within the network allowed the team to interpret that the neural network was "encoding something" about the semantics of a sentence rather than comparing phrase-to-phrase translations.
"We interpret this as a sign of existence of an interlingua in the network," the team said. As a result of the work, the Multilingual Google Neural Machine Translation is now being used across all of Google Translate and the firm said multilingual systems are involved in the translation of 10 of the 16 newest language pairs.
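A toy sketch of what "a form of interlingua representation" means geometrically. The concept vectors here are hand-built purely for illustration (in Google's system these representations are learned jointly, not written by hand, and the vocabulary below is made up):

```python
import math

# Hand-built "interlingua": words in different languages that mean the
# same thing share one concept vector. This only illustrates the
# geometry of a shared representation space -- not how it is learned.
CONCEPTS = {
    "cat":  [1.0, 0.0, 0.0], "gato":  [1.0, 0.0, 0.0],
    "dog":  [0.0, 1.0, 0.0], "perro": [0.0, 1.0, 0.0],
    "runs": [0.0, 0.0, 1.0], "corre": [0.0, 0.0, 1.0],
}

def embed(sentence):
    # Sentence meaning = average of its word vectors, language-agnostic.
    vecs = [CONCEPTS[w] for w in sentence.split()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

en = embed("cat runs")      # an English sentence
es = embed("gato corre")    # its Spanish translation
other = embed("dog runs")   # a sentence with a different meaning

print(cosine(en, es), cosine(en, other))
```

The translation pair lands on the same point in the shared space while the different-meaning sentence does not, which is the sense in which the network is "encoding something" about semantics rather than comparing phrase-to-phrase.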
Short answer: Language is a tool for conceptual beings that form, process (i.e., think about) and communicate information conceptually. Artificial intelligence is incapable (as of today) of forming or processing information conceptually; it can mimic making use of language, for sure, like a very convincing parrot. Don't confuse parroting conceptual tools with thinking conceptually.
Why haven't they? I assume it's because no one has assigned it as an AI project. If we were to do such a thing, what would we need? It has been observed that human twins will often develop their own language, so let's start with two identical computers with identical programming. Humans "enjoy" communicating (teaching, sharing thoughts and opinions, yada, yada, yada), so pleasure and pain sensations in the programming would come in handy. Let's go with the theme of pleasure when communicating, and mild pain after a period of time without communicating. In anticipation of the upcoming pain, computer 1 has to find something to communicate, so it accesses the internet for info, or maybe the surrounding environment (a form of curiosity?). A visual feed would be useful to identify objects in the environment. You can take it from here.
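The two-computer thought experiment above is close to what the emergent-communication literature calls a Lewis signaling game. A minimal sketch, using a simple Roth-Erev reinforcement rule as the stand-in for the "pleasure" reward (object count, symbol count, and round count are all illustrative choices):

```python
import random

N_OBJECTS = N_SYMBOLS = 3
rng = random.Random(0)

# Each agent keeps "propensity" weights; choices are sampled in
# proportion to them (classic Roth-Erev reinforcement learning).
sender = [[1.0] * N_SYMBOLS for _ in range(N_OBJECTS)]    # object -> symbol
receiver = [[1.0] * N_OBJECTS for _ in range(N_SYMBOLS)]  # symbol -> guess

def choose(weights):
    return rng.choices(range(len(weights)), weights=weights)[0]

def play_round():
    obj = rng.randrange(N_OBJECTS)   # the "world" shows computer 1 an object
    sym = choose(sender[obj])        # computer 1 emits a signal
    guess = choose(receiver[sym])    # computer 2 interprets it
    if guess == obj:                 # shared "pleasure": reinforce both maps
        sender[obj][sym] += 1.0
        receiver[sym][guess] += 1.0
    return guess == obj

history = [play_round() for _ in range(20000)]
early = sum(history[:1000]) / 1000
late = sum(history[-1000:]) / 1000
print(f"accuracy: first 1000 rounds {early:.2f}, last 1000 rounds {late:.2f}")
```

With enough rounds the two agents typically converge on a shared object-to-symbol convention -- a tiny invented "language" -- though ambiguous "pooling" codes can also emerge, which is one reason researchers study these games rather than assume a clean language appears.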
While neural networks excel at learning patterns and generating outputs based on existing data, creating a completely new language for communication requires a level of abstraction and conceptualization that current neural networks have not achieved. Language development involves complex cognitive processes, cultural and social influences, and shared understanding among users, which are beyond the capabilities of neural networks in their current state. While there have been advancements in natural language processing and generation, creating a truly novel language with its own rules and semantics remains an open challenge for AI research.
Artificial neural networks, inspired by the human brain, have undoubtedly achieved remarkable advancements in various tasks through deep learning. However, creating a language of their own involves a fundamentally different challenge. Language is a complex and nuanced system that emerges from the collective intelligence, experiences, and cultural context of human beings. It encompasses not only vocabulary and grammar but also semantics, pragmatics, and cultural nuances. While neural networks excel at pattern recognition and information processing, they lack the innate human qualities necessary for language creation, such as consciousness, subjective experiences, and socio-cultural understanding. Language is deeply intertwined with our humanity, emotions, and context, and these intricacies are not yet fully captured by artificial neural networks. Thus, while neural networks continue to evolve and push the boundaries of machine learning, the creation of a truly unique language remains an elusive feat for the time being.
We know largely NOTHING about human brain activity -- that is, what we "know" IS mostly the product of wild human imagination (and, truly taking that into account, you can see what I mean by "largely nothing"). AND: given THE Memories, and ALL of it passing through a limited working memory (needed for intentionality AT EACH CLEAR STEP), we WILL NOT BE ABLE TO KNOW -- unless, that is, we get control of ourselves. PRESENTLY, almost ALL of it is, in effect, a NO-GO: our traditional Western "thought", with its presumptions and ad hoc-s, IS NO GOOD.
[ Expect to have your post recommended and not mine -- but for NO sound logical reasons. (I "press on" for you all (i.e., ALL), anyway.) ]
They can: simply use autoencoders or similar methods. That would already turn any language into a simplified variant of itself. The problem is, if they create it and we don't understand it, what's the point?
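A minimal sketch of that compression idea, assuming only NumPy: a linear autoencoder squeezes one-hot word vectors through a small bottleneck, forcing the network to invent a compact internal "code" for the vocabulary (the vocabulary, sizes, and learning rate are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran", "far"]
V, K = len(vocab), 3          # 8 one-hot words squeezed into a 3-d code
X = np.eye(V)                 # one-hot input vectors, one row per word

W_enc = rng.normal(0, 0.1, (V, K))   # encoder: word -> compact code
W_dec = rng.normal(0, 0.1, (K, V))   # decoder: code -> word again

lr = 0.5
losses = []
for _ in range(2000):
    Z = X @ W_enc                    # the network's private "code"
    X_hat = Z @ W_dec                # attempted reconstruction
    err = X_hat - X
    losses.append(float((err ** 2).mean()))
    # plain gradient descent on mean squared reconstruction error
    grad_dec = Z.T @ err * (2 / X.size)
    grad_enc = X.T @ (err @ W_dec.T) * (2 / X.size)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The bottleneck activations `Z` are the network's private representation; nothing forces them to be human-readable, which is exactly the "we will not understand it" worry raised above.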
Follow-on question: how do you decode an invented, never-seen-before language with no obvious referents to real-world objects? How do you know that a message in this "language" isn't a joke or noise meant to confuse, rather than something meant to impart meaning?
Well, noise is a random pixel or symbol. If you have a large text you can run an analysis and check how many times each word repeats. Naturally, if the same word appears in multiple places, you know it is not noise but a real word. I'm not sure, but I think people apply the same logic when dealing with ancient texts. Take, say, Etruscan, an ancient, extinct language:
https://en.wikipedia.org/wiki/Etruscan_language
You can find the meanings of words by searching for how and where they repeat, comparing with other languages that are similar, and so on. Pattern search can be applied in cryptography as well; take the Caesar cipher as an example. That is the reason why new research tries to make ciphers which have no patterns, where everything looks like endless noise.
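The pattern-search idea can be shown concretely. A minimal sketch (rough textbook letter frequencies, not a serious cryptanalysis tool): break a Caesar cipher by trying all 26 shifts and keeping the one whose letter distribution looks most like English.

```python
from collections import Counter
import string

# Rough relative frequencies of English letters, from standard tables.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.1, 'z': 0.07,
}

def shift_text(text, shift):
    # Rotate each lowercase letter by `shift` positions; keep the rest.
    out = []
    for ch in text:
        if ch in string.ascii_lowercase:
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            out.append(ch)
    return "".join(out)

def score(text):
    # Higher when the text's letters follow English frequencies.
    counts = Counter(c for c in text if c in ENGLISH_FREQ)
    return sum(ENGLISH_FREQ[c] * n for c, n in counts.items())

def crack(ciphertext):
    # Try every possible shift; keep the most English-looking result.
    return max(range(26), key=lambda s: score(shift_text(ciphertext, s)))

plaintext = ("the pattern is the key: repeated words and repeated letters "
             "betray the structure of the hidden message")
ciphertext = shift_text(plaintext, 3)
recovered = crack(ciphertext)   # the shift that undoes the encryption
print("recovered shift:", recovered)
```

The same frequency logic that cracks this cipher is exactly what a well-designed modern cipher defeats: its output is made statistically indistinguishable from noise, so there are no repetitions to exploit.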
Andrius Ambrutis So an AI could create not just a language of its own but also a language indistinguishable from noise. How would one AI communicate with another AI without a key?