If you have ever witnessed a serious crime, you will understand that a witness is first required to give a verbal statement of the criminal event along with a description of the suspect’s visual appearance; you might then be asked to identify the suspect from a lineup, for which you must view different profiles of each face before making a positive or negative identification. Both language and face recognition are highly distributed functions in the neocortex of primates. In the case of language, studies have been ongoing for over a century, and it is well accepted that language is a highly networked process (involving, at a minimum, Wernicke’s and Broca’s areas), with every individual having a unique distribution of language content based on that individual’s learning history (Bloom and Markson 1998; Corkin 2002; Chomsky 2012; Everett 2017; Hebb 1949; Kimura 1993; Miller 1996; Ojemann 1991; Penfield and Roberts 1966): whether they are mono- or poly-lingual, whether their speech is advanced or monosyllabic, whether they write and read as well as they speak, whether the content of their language is general or expert, and so on. In short, we should never expect the stored content of language to be the same across individuals. It is for this reason that Chomsky’s observation (Chomsky, pers. comm. 2008, colloquium at MIT) that ‘there is no consistency across individuals using fMRI for language’ is indicative of the unique configuration of language storage between individuals, such that the only commonality across subjects is that they have a posterior and an anterior region in the brain (i.e., Wernicke’s and Broca’s areas), often located in the left hemisphere, that subserves the linguistic process (Penfield and Roberts 1966). How the brain is filled with linguistic information can only be established by tracking the learning history of a person and then arriving at an estimate of the total bits of information for language (see Footnote 1). As mentioned in previous communications, the neocortex of humans has a storage capacity of 1.6 x 10^14 bits, which corresponds to 2^(1.6 x 10^14) possible states (Tehovnik, Hasanbegović, Chen 2024); this should be more than enough capacity for 100 years of life (see Footnote 2).
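As a rough sanity check on the claim that 1.6 x 10^14 bits is more than enough for 100 years of life, the sketch below (a back-of-the-envelope calculation of my own, not a figure from the cited papers) compares the stated neocortical capacity with the bits that would accumulate if language ran continuously, day and night, at the 39 bits per second of Footnote 1; the continuous-transfer assumption is deliberately generous.

```python
# Back-of-the-envelope check: does 1.6 x 10^14 bits cover 100 years of language?
# Assumption (mine, and generous): language is transferred continuously at 39 bits/s.

CAPACITY_BITS = 1.6e14        # neocortical storage capacity (Tehovnik, Hasanbegović, Chen 2024)
LANGUAGE_RATE_BPS = 39.0      # average speech information rate (Coupé et al. 2019)
LIFESPAN_YEARS = 100

seconds_per_year = 365.25 * 24 * 3600
lifetime_bits = LANGUAGE_RATE_BPS * LIFESPAN_YEARS * seconds_per_year

print(f"Bits accumulated over {LIFESPAN_YEARS} years at {LANGUAGE_RATE_BPS} bits/s: {lifetime_bits:.2e}")
print(f"Fraction of neocortical capacity: {lifetime_bits / CAPACITY_BITS:.2%}")
# ~1.2e11 bits, i.e., well under 0.1% of the stated 1.6e14-bit capacity.
```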

So, what about face recognition? It is now believed that, at least in primates, a network of neurons located in the temporal cortex (mainly in its inferotemporal portion) contains a chain of neuronal patches that respond to different profiles of a face, such that at the anterior pole of the temporal lobes multiple profiles of a face are stored by individual neurons; these neurons are connected to the ventrolateral prefrontal cortex immediately anterior to the face representation of M1 (Brecht and Freiwald 2012; Bruce et al. 1981; Freiwald and Tsao 2010; Schwarzlose et al. 2005; Schwiedrzik, Freiwald et al. 2015). The cells in the ventrolateral prefrontal cortex respond to facial gestures, thereby integrating the face cells of the temporal cortex with the cells in the frontal cortex that are responsible for evoking gestures (Romanski 2012). An obvious function here is to learn the various facial gestures of one’s species (through vision and the other senses) and thereby enhance facial communication between conspecifics.

Frontal lobe aphasia and frontal lobe apraxia (both caused by damage to the ventrolateral prefrontal cortex, i.e., Broca’s area, or areas 44 and 45, in humans) are two conditions that, taken together, suggest that verbal language has its roots in frontal lobe mechanisms mediating the production of gestures in our monkey relatives (Kimura 1993), a line that diverged from the lineage leading to Homo sapiens some 25 million years ago (Kumar and Hedges 1998). By computing the number of synapses of the facial network in monkeys and humans, it should be possible to estimate the storage and transfer of facial information for the purpose of communication (see Tehovnik, Hasanbegović, Chen 2014), and by linking this information with the language centers in humans (which partially overlap the facial network; see attached Fig. 1 and Fig. 2), a global information estimate could be deduced for each individual, yielding language metrics in bits and bits per second that could then be correlated with one’s language aptitude, as sketched below. We are on our way to having a quantitative neuroscience that unifies the brain with behavior using information theory, as proposed about a decade ago [Tehovnik, E.J., 2014. Brain-machine Interfaces: Myths and Reality, Chilean Society for Neuroscience, Valdivia, Chile, October].
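To make the proposed bookkeeping concrete, here is a minimal sketch of how such an estimate might be organized: storage in bits derived from a synapse count (assuming, purely for illustration, a fixed number of bits per synapse) and transfer in bits per second derived from an observed behavior. The synapse count, the bits-per-synapse figure, and the behavioral numbers below are placeholders of my own, not values from the cited studies.

```python
# Illustrative bookkeeping for the proposed metrics; all numbers are placeholders.

def storage_bits(n_synapses: float, bits_per_synapse: float = 1.0) -> float:
    """Estimated storage of a network, assuming a fixed information content per synapse."""
    return n_synapses * bits_per_synapse

def transfer_bps(bits_expressed: float, seconds: float) -> float:
    """Information transfer rate of an observed behavior."""
    return bits_expressed / seconds

# Hypothetical synapse count for one individual's facial network:
face_network_synapses = 1e11
print(f"Facial-network storage: {storage_bits(face_network_synapses):.2e} bits")

# Hypothetical behavioral measurement: 390 bits of verbal/gestural output in 10 s.
print(f"Transfer rate: {transfer_bps(390, 10):.1f} bits/s")
```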

Footnote 1: Investigators may eventually be able to find a correlation between the amount of information stored and the amount of information transferred per individual; for language, the latter averages about 39 bits per second across languages (Coupé et al. 2019), but we know there is variability between individuals based on educational level, developmental factors, and genetics.

Footnote 2: When comparing neocortical/cerebellar information storage across animals, the capacity should be computed relative to lifespan to establish how much residual capacity exists in a particular species. The common assumption is that humans are a species outlier (the invention of cooking made it possible to feed an energy-expensive brain; Herculano-Houzel 2011), for which one might expect a massive residual capacity for information storage as well as an extreme information-transfer ability; the transfer rate, however, is ultimately limited by the musculature of an animal (Tehovnik, Hasanbegović, Chen 2024). For example, the physicist Stephen Hawking, owing to his disability, could transfer only 0.1 bits per second when generating words with his cheek muscle (Tehovnik, Patel, Tolias et al. 2021).
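Extending this footnote's point about residual capacity, the sketch below compares how much of the stated 1.6 x 10^14-bit capacity would ever be expressed over a 100-year lifespan at two transfer rates taken from the text: the 39 bits per second of an average speaker and Hawking's 0.1 bits per second. The assumption of continuous output is mine and, again, deliberately generous.

```python
# Residual capacity: fraction of neocortical storage never expressed over a lifetime
# at a given transfer rate. Continuous-output assumption is mine; rates are from the text.

CAPACITY_BITS = 1.6e14
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def residual_fraction(rate_bps: float, lifespan_years: float = 100) -> float:
    """Fraction of capacity left unexpressed if output ran continuously at rate_bps."""
    expressed_bits = rate_bps * lifespan_years * SECONDS_PER_YEAR
    return 1 - expressed_bits / CAPACITY_BITS

for label, rate in [("average speaker, 39 bits/s", 39.0),
                    ("Stephen Hawking, 0.1 bits/s", 0.1)]:
    print(f"{label}: {residual_fraction(rate):.4%} of capacity unexpressed")
```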

Figure 1: A side view of the human brain is shown. Areas 45 and 44 (Broca’s area) receive information from both the posterior language area (Wernicke’s area and the auditory cortex) and the inferotemporal cortex housing the neurons that encode objects such as faces in area 37.

Figure 2: An fMRI study of speech and gestures found that two regions in the frontal lobes were activated by either task: viewing a person performing gestures or listening to speech describing the gestures performed by that person. For both the gestural and the verbal presentations, areas 44 and 45 (Broca’s area) and area 47 were activated. In the posterior neocortex, Wernicke’s area/the face region (area 37) was activated. Figure from Xu et al. (2009, Fig. 2).
