Toward a New Test for AI Entities: The Growth and Realization of Autonomous Wisdom (GROW-AI) Test
By Alexandru TUGUI
In "Reflections on the Limits of Artificial Intelligence," published in 2004 in Ubiquity (edited by Peter J. Denning), I posited that "Nature is very simple and efficient in everything she makes. We, humans, complicate things." The abstract elaborated: "Nature is very simple and efficient in everything she makes, and is extremely obvious. We humans like to simulate in an extremely complicated manner what exists quite simply in Nature, and what we succeed in simulating falls in the category of artificial intelligence" (Tugui, 2004).
The 2004 paper outlined four key limitations of artificial intelligence:
1. "Artificial intelligence must take into account the law of entropy."
2. "The entire foundation of artificial intelligence is based on informatic procedures," addressing the critical role of algorithms in intelligent entities.
3. "The two pillars of computer science, '0' and '1,' together with the truth values 'True' and 'False,' are major borders in artificial intelligence."
4. "Artificial intelligence relies heavily on symbolic logic and has not succeeded in incorporating so-called affective logic."
Additionally, the paper asked, "When will a computer 'grow up'?" suggesting a future in which robots might replace traditional computers.
In 2019, three years before LLMs surged into society, I revisited AI's limitations from a scientific perspective and concluded that humans represent a dual biological limit to artificial intelligence. Unaware of the developments 2022 would bring, I wrote: "We registered on the Bio-Tech-Bio-AI path the inability of the individual regarding self-understanding (the first biological limitation) in order to develop theories about natural intelligence, which induces a limitation on the creation and use of technologies appropriate to the process of imitation/mimetization of the biological system (the second biological limitation) in order to design what we should have understood, i.e., artificial intelligences."
In March 2023, after the release of ChatGPT and other LLMs, I applied an "oriented self-literature review" (a methodology proposed to self-critique and update one's ideas), and I prepared a presentation for ICECCME 2023 (International Conference on Electrical, Computer, Communications and Mechatronics Engineering, 19-21 July 2023, Tenerife, Canary Islands, Spain). In this presentation, I re-analyzed the limits proposed in the earlier works (Tugui, 2004; Tugui et al., 2019) and updated these AI limits, particularly for humanoid robots (Tugui, 2023), as follows:
1. The limit imposed by the laws of entropy and gravity.
2. The limit of algorithms to mimic human behavior against the backdrop of the two pillars of computer science, '0' and '1.'
3. The limit due to the inability to fully integrate affective logic alongside the challenge of replicating human sensory experiences.
Even without confirmation in March 2023 that ChatGPT had passed the Turing test (a claim made explicit on July 25, 2023, in Biever's Nature feature "ChatGPT broke the Turing test — the race is on for new ways to assess AI"), I highlighted in Tugui (2023) another crucial test for intelligent entities, especially humanoid robots: "to grow up on their own," both physically and intellectually.
Such a test would be the ultimate benchmark for any intelligent entity. Hence, I propose naming it the Growth and Realization of Autonomous Wisdom (GROW-AI) test.
The GROW-AI Test
GROW-AI is designed to evaluate the capabilities of intelligent entities in terms of their ability to grow, evolve, and achieve advanced wisdom autonomously. Key criteria for this test include:
1. Autonomous Physical and Intellectual Growth:
- The entity must demonstrate the ability to change and evolve in its physical form or capabilities without direct human intervention.
- This includes accumulating qualitative knowledge and independently improving skills.
2. Understanding and Controlling Entropy and Gravity:
- The entity must show the ability to understand and manipulate entropy and gravity to its advantage.
- This involves performing complex physical tasks and optimizing movements, considering gravitational and entropic constraints.
3. Efficient Software Algorithms:
- The entity must use efficient algorithms that reliably emulate human behavior.
- This includes effectively responding to stimuli and interactions and imitating human-like sensory and emotional logic.
4. Sensory and Affective Logic:
- The entity must possess advanced sensory processing and emotional reasoning capabilities.
- This involves interpreting and reacting to sensory data in ways that reflect human-like affective states.
5. Self-Assessment:
- The entity must be capable of self-assessment and autonomous evaluation of its performance and development.
- This includes identifying areas for improvement and adapting or reprogramming itself accordingly.
6. Advanced Autonomous Wisdom:
- The entity must demonstrate high levels of autonomous wisdom, including ethical reasoning, decision-making, and learning from experiences.
- This includes planning long-term, solving creative problems, and adapting to new and unforeseen challenges.
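To make the six criteria concrete, the evaluation could be sketched as a simple scoring rubric: each criterion receives a score, and the test is passed only when every dimension clears a threshold, so that a single weak capability (say, affective logic) fails the entity. This is a minimal illustrative sketch; the criterion identifiers, the 0-to-1 scale, and the 0.8 threshold are my assumptions, not part of the original proposal.

```python
# Hypothetical GROW-AI scoring rubric (illustrative only).
# Criterion names, the 0.0-1.0 scale, and the pass threshold are
# assumptions made for this sketch, not part of the GROW-AI proposal.

from dataclasses import dataclass

# The six GROW-AI criteria, in the order given in the text.
CRITERIA = (
    "autonomous_growth",        # 1. autonomous physical and intellectual growth
    "entropy_and_gravity",      # 2. understanding and controlling entropy and gravity
    "efficient_algorithms",     # 3. efficient software algorithms
    "sensory_affective_logic",  # 4. sensory and affective logic
    "self_assessment",          # 5. self-assessment
    "autonomous_wisdom",        # 6. advanced autonomous wisdom
)

@dataclass
class GrowAiReport:
    scores: dict  # criterion name -> score in [0.0, 1.0]

    def passes(self, threshold: float = 0.8) -> bool:
        # Every criterion must be scored, and every score must meet the
        # threshold: one weak dimension fails the whole test.
        if set(self.scores) != set(CRITERIA):
            raise ValueError("every GROW-AI criterion must be scored")
        return all(score >= threshold for score in self.scores.values())

# Example: an entity strong everywhere except affective logic.
report = GrowAiReport(scores={
    "autonomous_growth": 0.90,
    "entropy_and_gravity": 0.85,
    "efficient_algorithms": 0.95,
    "sensory_affective_logic": 0.40,
    "self_assessment": 0.90,
    "autonomous_wisdom": 0.88,
})
print(report.passes())  # False: affective logic is below threshold
```

The all-criteria-must-pass design reflects the framing above: GROW-AI is not a weighted average but a conjunction, since an entity that cannot, for example, self-assess has not "grown up" regardless of its other scores.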
References:
Tugui, A. (2004). Reflections on the limits of artificial intelligence. Ubiquity, 5(December), 2-6.
Tugui, A., Danciulescu, D., & Subtirelu, M. (2019). The biological as a double limit for artificial intelligence: Review and futuristic debate. International Journal of Computers, Communications and Control, 14(2), 253-271. https://doi.org/10.15837/ijccc.2019.2.3536
Tugui, A. (2023). Limits of humanoid robots based on a self-literature review of AI's limits. In 2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME) (pp. 1-4). Tenerife, Canary Islands, Spain. https://doi.org/10.1109/ICECCME57830.2023.10252993
Biever, C. (2023). ChatGPT broke the Turing test — the race is on for new ways to assess AI. Nature, 619, 686-689. https://doi.org/10.1038/d41586-023-02361-7