See Figure 1:

Figure 1: Neocortical-cerebellar loops per language. A command to speak is issued from the neocortex (cortex), which projects to the cerebellar cortex at the Purkinje cells (Purkinje) via the pons (Pons). The return limb of the loop passes through the cerebellar nuclei (Nuclei) and the thalamus (Thal) en route back to the neocortex. According to electrical-inactivation experiments on human subjects by Ojemann (1983, 1991), each language is stored in a separate neocortical territory. This idea is supported by work on stroke patients, in whom the primary language is often preserved over secondary languages (shown for cerebellar stroke by Mariën et al. 2017), and it is well known that simultaneous interpretation between languages is difficult, likely because of this neural segregation between the language loops. Irrespective of the language, speech conveys information at a comparable rate of about 39 bits per second, which corresponds to roughly half a trillion (2^39) distinguishable messages per second, just short of 1 trillion (Coupé et al. 2019). This dovetails with Chomsky's idea that all humans share a universal-grammar capability (Chomsky 1965), further supporting the idea that we are one species. Whether other mammals have a similar capability is not known, but machine learning is now being used to decipher communications between whales, some of which, e.g., the killer whale, have more than 40 billion neocortical neurons (Ridgway et al. 2019); humans have about 16 billion neocortical neurons by comparison (Herculano-Houzel 2009). The loop configuration in the figure is based on the anatomical, unit-recording, and optogenetic experiments of Hasanbegović (2024), all performed in the mouse and generalized to the primate (Thach et al. 1992).
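As a quick sanity check on the bit-rate arithmetic above, here is a minimal Python sketch (the 39 bits/s figure is from Coupé et al. 2019; the rest is elementary counting, since n bits can distinguish 2^n alternatives):

    # 39 bits/s (Coupé et al. 2019) can distinguish 2**39 messages each second.
    bits_per_second = 39
    possibilities_per_second = 2 ** bits_per_second
    print(f"{possibilities_per_second:,}")      # 549,755,813,888 (~0.55 trillion)
    print(possibilities_per_second < 10 ** 12)  # True: just short of 1 trillion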
