How should a system of institutional control over the development of advanced artificial intelligence models and algorithms be built so that this development does not get out of hand and lead to negative consequences that are currently difficult to predict?
Should the development of artificial intelligence be subject to control? And if so, who should exercise this control? How should an institutional system for controlling the development of artificial intelligence applications be built?
Why are the founders of leading technology companies, including those developing ICT, Internet technologies, Industry 4.0 solutions and artificial intelligence, now calling for the development of this technology to be deliberately and periodically slowed down, so that it remains fully under control and does not get out of hand?
To the question of whether the development of artificial intelligence should be controlled, the answer is probably obvious: yes, it should. What remains debatable, however, is how the system of institutional control over the development of advanced artificial intelligence models and algorithms should be structured so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee. Moreover, if the answer is yes, then who should exercise this control? And how should an institutional system of control over the development of advanced artificial intelligence models, algorithms and their applications be constructed, so that the potential and real future negative effects of dynamic and not fully controlled technological progress do not outweigh the positive ones?

At the end of March 2023, a group of technology developers and artificial intelligence experts, alongside businessmen and investors in technology start-ups, called in a joint letter for at least a six-month pause in the development of artificial intelligence systems more capable than GPT-4, which had been released that same month. The signatories include Apple co-founder Steve Wozniak; Elon Musk, founder or co-founder of PayPal, SpaceX, Tesla, Neuralink and the Boring Company; Emad Mostaque, head of Stability AI (maker of the Stable Diffusion image generator); and artificial intelligence researchers from Stanford University, the Massachusetts Institute of Technology (MIT) and other universities and laboratories.

The letter, which acts as a kind of cautionary petition, was published on the website of the Future of Life Institute. It argues that advanced artificial intelligence could represent "a profound change in the history of life on Earth" and that the development of this technology should therefore be approached with caution. The petition warns of the unpredictable consequences of the race to create ever more powerful models and ever more complex algorithms, the key components of artificial intelligence technology. Its authors suggest that the development of artificial intelligence should be slowed down temporarily, because a risk has now emerged that this development could slip out of human control.

The petition further warns that an uncontrolled approach to AI development risks a deluge of disinformation, the mass automation of work, the replacement of humans by machines and even a "loss of control over civilisation". If the current rapid development of artificial intelligence systems gets out of hand, the letter suggests, the scale of disinformation on the Internet will increase significantly and the automation of work already under way will accelerate many times over, which may cost around 300 million people their jobs within the current decade and, as a consequence, may lead to a kind of loss of human control over the development of civilisation. The signatories argue that advanced artificial intelligence systems should be developed only when this development is fully under control, its effects are positive and the potential risks are manageable.
The developers of new technologies are calling for a temporary pause in the training of systems more capable than OpenAI's recently released GPT-4, which, among other things, can pass examinations of various kinds at a level close to the best human results. The letter also calls for comprehensive government regulation and oversight of new, advanced AI models, so that the development of this technology does not outpace the creation of the necessary legal regulations.
In view of the above, I address the following questions to the esteemed community of scientists and researchers:
Why are the founders of leading technology companies, including those developing ICT, Internet technologies, Industry 4.0 solutions and artificial intelligence, now calling for the development of this technology to be deliberately and periodically slowed down, so that it remains fully under control and does not get out of hand?
Should the development of artificial intelligence be controlled? And if so, who should exercise this control? How should an institutional system of control over the development of artificial intelligence applications be built?
How should a system of institutional control of the development of advanced artificial intelligence models and algorithms be built, so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee?
What do you think?
What is your opinion on the subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz