Will generative artificial intelligence, trained through deep learning with artificial neural network technology to perform activities so far carried out only by humans, to solve complex tasks, and to improve itself at specific tasks, be able to learn from its activities and, in the process of self-improvement, learn from its own mistakes?
Could a future combination of generative artificial intelligence technology and general artificial intelligence result in a highly technologically advanced super general artificial intelligence that improves itself, and could this self-improvement escape human control, so that the system becomes independent of its human creator?
An important issue for the prospects of artificial intelligence technology and its applications is whether intelligent systems built on generative artificial intelligence and taught to perform highly complex tasks should be given a certain range of independence: the ability to improve themselves and to repair randomly occurring faults, errors, and system failures. For many years there have been deliberations and discussions about giving such systems greater autonomy in deciding on self-improvement and on the repair of system faults and errors caused by random external events.

On the one hand, when security systems based on generative artificial intelligence are built and developed in public institutions, or in commercially operating business entities that provide a certain category of safety for people, giving these intelligent systems a degree of decision-making autonomy is important. In a serious crisis, natural disaster, geological disaster, earthquake, flood, fire, and so on, a human may decide too late relative to the much greater speed of response available to an automated, intelligent security, emergency-response, early-warning, risk-management, or crisis-management system.

On the other hand, the greater the degree of self-determination given to an automated, intelligent information system, including a security system, the greater the probability of a failure that changes the operation of the system, with the result that the automated, intelligent, generative-artificial-intelligence-based system slips completely out of human control.

For an automated system to return quickly to correct operation on its own after a negative external factor causes a system failure, it must be given some scope of autonomy and self-decision-making. Determining what that scope should be requires, first, a multifaceted analysis and diagnosis of the factors that can act as risk factors and cause the malfunction or failure of an intelligent information system.

Moreover, if generative artificial intelligence technology is in the future enriched with super general artificial intelligence technology, the scope of autonomy given to an intelligent information system built to automate a risk management system and to provide a high level of safety for people may be high. If, at that stage of development, a system failure were nevertheless caused by certain external, or perhaps also internal, factors, the negative consequences of such a system slipping out of human control could be very large and are currently difficult to assess. In this way, the paradox of building and developing systems based on super general artificial intelligence technology may be realized.
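Before turning to that paradox, the notion of a bounded scope of autonomy discussed above can be made concrete. The sketch below is a minimal, hypothetical illustration in Python; the function name, thresholds, and action labels are my own assumptions, not the design of any real security system:

```python
# A minimal sketch of the bounded-autonomy idea discussed above.
# All names, thresholds, and actions are hypothetical assumptions.

def decide(risk_score: float, confidence: float,
           autonomy_ceiling: float = 0.7) -> str:
    """Return who acts: the automated system or a human operator.

    risk_score       estimated severity of the detected event (0..1)
    confidence       the system's confidence in its own assessment (0..1)
    autonomy_ceiling the maximum risk level the system may handle alone
    """
    if confidence < 0.5:
        # Uncertain assessments are always escalated to a human.
        return "escalate_to_human"
    if risk_score <= autonomy_ceiling:
        # Within its mandate the system responds immediately, which
        # preserves its speed advantage over a human operator.
        return "act_autonomously"
    # Beyond the ceiling the system only alerts; a human decides.
    return "alert_and_wait_for_human"

# Example: a confidently detected but very high-risk event
print(decide(risk_score=0.9, confidence=0.95))  # alert_and_wait_for_human
```

The design choice mirrors the argument above: inside a narrow mandate the system keeps its speed advantage, while anything uncertain or high-stakes is handed back to a human operator.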
This paradox is as follows. The more perfect the automated, intelligent system built by humans, an information system far exceeding the capacity of the human mind to process and analyze large sets of data and information, the higher the level of autonomy such a system will be given: to make crisis-management decisions, to repair its own failures, and to decide much faster than a human could. On the other hand, an abnormal event of low probability, the occurrence of a new type of external factor, or the materialization of a new category of risk may nevertheless cause the failure of such a highly intelligent system, and that failure may take the system completely out of human control. The consequences, above all the negative consequences for humans, of a highly autonomous intelligent information system based on super general artificial intelligence slipping out of control in this way would be difficult to estimate in advance.
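This tension can also be expressed as a toy risk calculation. The numbers below are purely illustrative assumptions, not empirical data: the failure probability p falls as the system becomes more perfect, but the potential harm C of a failure that escapes human control grows faster, so the expected loss (p times C) can still rise:

```python
# Toy illustration of the autonomy paradox described above.
# All numbers are hypothetical assumptions chosen for illustration only.
scenarios = [
    # (description, failure probability p, harm if control is lost C)
    ("narrow tool, human in the loop", 1e-2, 1.0),
    ("autonomous security system",     1e-3, 1e2),
    ("hypothetical super general AI",  1e-5, 1e6),
]

for name, p, harm in scenarios:
    expected_loss = p * harm  # classic risk = probability x consequence
    print(f"{name:32s} p={p:.0e}  C={harm:.0e}  expected loss={expected_loss:.2f}")
```

In this toy example the most capable system has the smallest failure probability and yet the largest expected loss, which is exactly the paradox stated above.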
In view of the above, I address the following questions to the esteemed community of scientists and researchers:
Could a future combination of generative artificial intelligence and general artificial intelligence technologies result in a highly technologically advanced super general artificial intelligence that improves itself, and could this self-improvement escape human control, so that the system becomes independent of its human creator?
Will generative artificial intelligence, trained through deep learning with artificial neural network technology to perform activities so far carried out only by humans, to solve complex tasks, and to improve itself at specific tasks, be able to draw conclusions from its activities and, in the process of self-improvement, learn from its own mistakes?
In the future, will generative artificial intelligence learn from its own mistakes in the process of self-improvement?
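As a narrow technical aside on the question of learning from mistakes: in the mechanical sense of deep learning, a model already adjusts itself using its own errors. The toy Python sketch below, with a single-parameter model and made-up data of my own choosing, shows this error-driven loop; whether self-improvement in the broader, autonomous sense asked about above is possible remains the open question:

```python
# A minimal sketch of error-driven learning: a model corrects itself
# using its own mistakes. A toy one-parameter example, not a real
# generative AI system; the task and data are illustrative assumptions.

# Hypothetical task: learn the mapping y = 3x from examples.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # the model's single adjustable parameter
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y_true in data:
        y_pred = w * x           # the model acts
        error = y_pred - y_true  # it observes its own mistake
        w -= lr * error * x      # and corrects itself (gradient step)

print(f"learned w = {w:.3f}")  # approaches 3.0
```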
The key issues of opportunities for and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work, written on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz