Large language models (LLMs) can write essays and announcements, solve problems, and compose poetry. Some people consider LLMs a direct path to artificial general intelligence, or believe that an LLM may spontaneously turn into a superintelligence. However, an LLM is a pretrained system that converts an input array of symbols into an output array of symbols, and in general the input information does not change the state of the system. An LLM can therefore be regarded as a chemical reactor filled with various substances selected to maximize the diversity and coherence of the output. The quantity and diversity of the substances in the reactor are invariable. When an input substance (an input array of symbols) is added to the reactor, a chemical process occurs, resulting in output substances. The more numerous and diverse the compounds in the reactor (the more parameters in the LLM), the more complex the substances (the output array) the reactor can yield. Reactions in the reactor are highly unstable, since insignificant changes in the input substances may result in considerable changes in the output substances.
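The "reactor" view above can be sketched as a toy Python program. This is an illustrative assumption, not an actual LLM: a tiny frozen table stands in for the billions of fixed parameters, and the point is only that processing an input never mutates the system's state, while a one-symbol change in the input can change the output entirely.

```python
# Toy "reactor": a pretrained system as a fixed function from input
# symbols to output symbols. FROZEN_WEIGHTS is a hypothetical stand-in
# for the parameters fixed at training time.
FROZEN_WEIGHTS = {"abc": "xyz", "abd": "qrs"}

def reactor(prompt: str) -> str:
    # Reading from the table never modifies it: the input information
    # does not change the state of the system.
    return FROZEN_WEIGHTS.get(prompt, "???")

state_before = dict(FROZEN_WEIGHTS)
out1 = reactor("abc")            # one input array of symbols
out2 = reactor("abd")            # a one-symbol change in the input
assert FROZEN_WEIGHTS == state_before  # state is untouched after both calls
assert out1 != out2              # yet the outputs differ entirely
```

Nothing in the sketch learns: running it a million times leaves `FROZEN_WEIGHTS` identical, which is the sense in which the quantity and diversity of the reactor's substances are invariable.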
Such instability is a consequence of the fact that, from a biological point of view, the reactor is not an organism, because any organism is a self-sustaining and therefore stable system. Because the reactor is not an organism, it lacks the cognitive characteristics of an organism. Cognition is the capability of using characteristics of the environment that are not directly necessary for the functioning of an organism in order to obtain, or avoid, something that will be necessary for its functioning in the future. The simplest form of cognition is chemotaxis, in which an organism follows a low concentration of a nutrient in the hope of reaching a high concentration. Chemotaxis is available even to microorganisms. In LLM terms, chemotaxis would mean that some arrays of symbols are more important to the LLM than others. The reactor has no such property, and this property cannot be achieved because of the pretrained and autoregressive design of LLMs. Thus, before reaching the level of human intelligence, LLMs would first have to reach the level of a unicellular organism.
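The chemotaxis described above can be shown as a minimal sketch. The concentration field and the step rule below are illustrative assumptions, not a model of any real microorganism; the sketch only shows the key property: a low concentration now serves as a cue for reaching a high concentration later.

```python
# Toy chemotaxis: an agent on a line senses the local nutrient gradient
# and climbs it. The field peaks at x = 10 (an arbitrary assumed value).
def concentration(x: float) -> float:
    # Low concentration far from the peak is still informative: it tells
    # the agent which direction improves its future situation.
    return 1.0 / (1.0 + (x - 10.0) ** 2)

def chemotaxis(x: float, steps: int = 100, dx: float = 0.5) -> float:
    for _ in range(steps):
        # Step toward whichever side has the higher concentration.
        if concentration(x + dx) > concentration(x - dx):
            x += dx
        else:
            x -= dx
    return x

final = chemotaxis(0.0)  # the agent ends up near the nutrient peak
```

The agent treats some sensory inputs as more important than others, which is exactly the property the text claims a frozen, autoregressive reactor lacks: the reactor maps symbols to symbols with no stake in where the mapping leads.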
One may argue that artificial systems need not copy natural ones literally; instead, they attempt to match the functionality of natural systems. Cars are capable of moving although they have no legs. Since LLMs answer questions and write code, the argument goes, they already approach human cognition. However, writing essays and coding are not fundamental characteristics of human cognition. The fundamental characteristic is the ability to set arbitrary goals and to pursue them flexibly, choosing appropriate means through feedback loops. Writing and problem-solving are consequences of this ability. LLMs do not imitate it.