Has the rivalry among leading technology companies in perfecting generative artificial intelligence technology already entered a path of no return, one that could inevitably lead to the creation of a super general artificial intelligence capable of self-improvement and of developing beyond human control? And if so, what risks could be associated with such a scenario of AI technology development?

The rivalry among IT giants, the leading technology companies perfecting generative artificial intelligence, may have already entered a path of no return. At the same time, commentary is growing in the media about where this rivalry may lead and whether that point of no return has in fact been passed. In the spring of 2023, some of these same IT giants attempted to slow this not entirely tame development, but without success. As a result, regulators are now expected to step in and sanction this development with rules covering, for example, how copyright applies to creative processes in which artificial intelligence takes on the role of creator.

In the growing body of commentary on the use of artificial intelligence technology in ever more spheres of human life and professional work, questions arise about the dangers involved. There are also attempts to gloss over the subject by suggesting that the development of AI technology and its applications cannot escape human control, that AI is unlikely to replace humans and will merely assist them in many jobs, that the vision of disaster known from the "Terminator" saga of science fiction films will not materialize, that human-like intelligent androids will never become fully autonomous, and so on.

Or perhaps in this way man is subconsciously trying to escape from a different kind of reflection: that the technological advances taking place under Industry 5.0, driven by leading technology companies competing to be the first to create a highly advanced super general artificial intelligence, could soon produce something smarter than man, able to improve itself without human involvement and to develop in directions that man cannot even imagine, let alone predict.
Perhaps the greatest fear about the consequences of the unbridled development of AI applications stems from the possibility that its result could be something that intellectually surpasses humans. Such a situation has sometimes been described as an attempt to create one's own God (not an idol, but a God). These considerations repeatedly lead to the conclusion that what is most fascinating can also generate the greatest dangers.

In view of the above, I address the following question to the esteemed community of scientists and researchers:

Has the competition among leading technology companies to perfect generative artificial intelligence technology already entered a path of no return, one that may inevitably lead to the creation of a super general artificial intelligence that will attain the capacity for self-improvement and may escape human control in the course of this development? And if so, what risks could be associated with such a scenario of AI technology development?

Has the competition among leading technology companies to perfect generative artificial intelligence technology already entered a path of no return?

And what is your opinion about it?

Please answer,

I invite everyone to join the discussion,

Thank you very much,

Best wishes,

Dariusz Prokopowicz

The above text is entirely my own work, written on the basis of my research.

In writing this text, I did not use other sources or automatic text-generation systems.

Copyright by Dariusz Prokopowicz
