“Machine learning tells us nothing!!!! It [human language] is too rich. … It [language] works as well with impossible languages as with regular languages. … [AI] is as useful as a bulldozer is useful. … It tells you nothing about mind and thoughts. … Computational complexity is what accounts for language. … [AI] will never provide [an] explanation to language. … Syntax gives language meaning…and syntax is totally independent of the external world [and is controlled by the brain].” (Chomsky, N., 2022, On Theories of Linguistics (Part 2), Dec. 30, 42.32 minutes, YouTube).
If Noam Chomsky is correct that his universal grammar, as controlled by the brain, is independent of the external world, then ChatGPT—which is built by vacuuming up large amounts of data from the external world—will never be a suitable metaphor for human language. Furthermore, AI was not designed to provide meaning to its output; it is merely a transmitter of an output that has undergone a prior reconfiguration, i.e., a factor analysis acting on its input.
A central issue for AI is the energy consumed to achieve intelligence. Currently, the data centers used to train AI models are being built across the United States, and each needs to be situated next to a high-density energy source such as a nuclear power station [which can generate a gigawatt of electricity—enough to power a midsized town, Buongiono 2024]. Training GPT-3 required an estimated 10 gigawatt-hours (McQuate 2023), and training DeepSeek required about 1 gigawatt-hour (Ojha 2025). By comparison, the human brain runs on roughly 20 watts (Balasubramanian 2021), most of which is consumed by neocortical neurons (Herculano-Houzel 2011). It can take a decade of schooling to become proficient at language, which at 20 watts amounts to about 1.7 million watt-hours, but much of this energy would also go toward brain growth (mitosis continues to the age of twelve: Sanai et al. 2011; Sorrells et al. 2018) and toward acquiring knowledge beyond language. The parallel processing capability between the neocortex and the cerebellum facilitates human learning, as discussed in earlier chapters. DeepSeek appears to exploit a similar principle, i.e., parallel processing across chips or GPUs (graphics processing units), to improve the efficiency of training its AI model (DeepSeek-AI 2024a).
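The energy comparison above can be made explicit with a short back-of-the-envelope calculation. The training figures are those cited in the text (McQuate 2023; Ojha 2025); the 20-watt brain figure follows Balasubramanian (2021); the decade-of-schooling duration is an illustrative assumption, not a measured quantity.

```python
# Back-of-the-envelope comparison: AI training energy vs. brain energy
# over a decade of schooling. All figures are as cited in the text;
# the ten-year schooling window is an illustrative assumption.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

gpt3_training_wh = 10e9       # 10 gigawatt-hours (McQuate 2023)
deepseek_training_wh = 1e9    # 1 gigawatt-hour (Ojha 2025)

brain_power_w = 20            # ~20 watts (Balasubramanian 2021)
schooling_years = 10          # assumed decade of schooling

brain_schooling_wh = brain_power_w * HOURS_PER_YEAR * schooling_years
print(f"Brain, 10 years at 20 W: {brain_schooling_wh:,.0f} Wh")
# 1,752,000 Wh, i.e., ~1.7 million watt-hours as stated in the text

print(f"GPT-3 / brain ratio:    {gpt3_training_wh / brain_schooling_wh:,.0f}x")
print(f"DeepSeek / brain ratio: {deepseek_training_wh / brain_schooling_wh:,.0f}x")
```

Even on these rough figures, GPT-3's cited training energy exceeds a decade of the brain's total consumption by three to four orders of magnitude, which is the force of the comparison in the text.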
Animals, including humans, are autonomous and motivated to self-replicate. A programmer could install autonomous and motivational code in an AI robot. Yet at this point in time, if such a robot were transported to Mars to build a society for its community of offspring, there would be no society and no offspring. Many rovers have been sent to Mars (e.g., Sojourner 1997; Spirit 2004; Opportunity 2004; Curiosity 2012; Perseverance 2021), but only one of these, Perseverance, is still functional, and it has no capacity to reproduce or to build a society independent of humans. Ergo, if AI robots misbehave, we will still hold the programmers responsible (Harari 2024), and the goal of absolute autonomy and spontaneous replication may never be reached (see LeCun 2023).
Finally, it needs to be emphasized that consciousness depends entirely on neurons, a biological product. When Penfield and Jasper (1954) resected sections of the association cortices (to treat epilepsy), the remaining neocortical neurons filled in the missing pieces, much as the blind spot is filled in by the association cortex. Neuropsychological testing is required to bring the missing pieces to a patient's awareness, just as the blind spot is revealed to us only through testing. When the fibers between the neocortex and cerebellum are transected at the level of the pons and thalamus, all consciousness is lost (Plum and Posner 1980). The neocortex controls conscious processes, and the cerebellum establishes and stores well-learned psycho-motor routines, as previously argued.