How should the development of AI technology be regulated so that this development and its applications are realized in accordance with ethics?

How should the development of AI technology be regulated so that this development and its applications are realized in accordance with ethics, so that AI technology serves humanity, does not harm people and does not generate new categories of risk?

A SWOT analysis of the applications of artificial intelligence technology in the business activities of companies and enterprises shows that many applications of this technology already exist and many more are being developed, i.e., that the current fourth and/or fifth technological revolution offers numerous development opportunities across various spheres of business activity. At the same time, there are many risks arising from uses of the new technology that are inappropriate, incompatible with prevailing social norms, with standards of reliable business conduct and with business ethics.

Among the most widely recognized negative aspects of the improper use of generative artificial intelligence is the use of AI-equipped graphics applications available on the Internet that make it simple and easy to generate photos, graphics, images, videos and animations which, in a highly realistic form, depict something that never actually happened, i.e., which present what could be described as "fictitious facts" in a very professional-looking manner. In this way, Internet users can become generators of disinformation in online social media, where they can post such generated images, photos and videos together with descriptions, posts and comments in which the "fictitious facts" shown in the material are also described in an editorially polished way. Moreover, those descriptions, posts and comments can themselves be drafted with the help of intelligent chatbots available on the Internet, such as ChatGPT, Copilot or Gemini.

Disinformation, however, is not the only serious problem, although it has intensified significantly since OpenAI released the first versions of the ChatGPT chatbot online in November 2022. In companies and enterprises that implement generative artificial intelligence in various spheres of their business, a new category of technical operational risk associated with the applied AI technology has emerged. In addition, there is a growing scale of risk arising from conflicts of interest between business entities related to the not yet fully regulated copyright status of works created with applications and information systems equipped with generative artificial intelligence. Accordingly, there is demand for a standard for a kind of digital signature with which works created using AI technology could be electronically signed, so that each such work would be unique and unrepeatable and its counterfeiting would thus be seriously hampered (a minimal code sketch of such a signing step is given further below).

These, however, are only some of the negative aspects of the developing applications of AI technologies for which no functioning legal norms yet exist. In mid-2023 and then in the spring of 2024, European Union bodies made public preliminary versions of the legal norms on the proper, ethically sound business use of this technology, which were given the name AI Act. The AI Act contains a number of specifically defined types of AI applications deemed inappropriate or unethical, i.e., applications that should not be used.
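To make this tiered approach more concrete, the snippet below is a purely illustrative sketch, in Python, of how the commonly cited risk tiers of the AI Act (prohibited practices, high-risk systems, limited-risk systems subject to transparency obligations, and minimal-risk systems) could be represented in software. The example use cases and the mapping are my own simplified assumptions for illustration, not a legal classification.

```python
# Purely illustrative sketch: the four commonly cited AI Act risk tiers and a few
# example use cases. The mapping below is a simplification of my own for
# illustration only and is not a legal classification.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"   # e.g. social scoring of citizens
    HIGH = "high risk"                     # e.g. AI used in recruitment or credit scoring
    LIMITED = "transparency obligations"   # e.g. chatbots must disclose they are AI
    MINIMAL = "minimal risk"               # e.g. spam filters, AI in video games


# Hypothetical examples of how concrete business use cases might be tiered.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "e-mail spam filtering": RiskTier.MINIMAL,
}


def risk_tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier assigned to a named use case."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)


# Example: an internal compliance checklist could flag high-risk deployments for review.
# print(risk_tier_for("CV screening for recruitment"))  # RiskTier.HIGH
```

A mapping of this kind could, for instance, be used inside a company's internal compliance checklist to flag planned AI deployments for legal review before they are put into production.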
The AI Act classifies, according to different levels of negative impact on society, various types and specific examples of inappropriate and unethical uses of AI technologies in the context of business as well as non-business activities. An important issue to consider is the extent to which the technology companies developing AI are committed to respecting such regulations, so that the ethical use of this technology is also addressed, as far as possible, in the technological design choices of the companies that create, develop and implement it. Moreover, if the AI Act's legal norms are not to remain a dead letter once they enter into force, it is necessary to introduce sanction instruments in the form of specific penalties for business entities that use artificial intelligence technologies unethically, antisocially or contrary to the AI Act. On the other hand, it would also be a good solution to introduce a system of rewarding those companies and enterprises that make the most proper, pro-social and fully ethical use of AI technologies, in accordance with the provisions of the AI Act.

Given that more than two years will pass before the AI Act becomes fully applicable, it is necessary to constantly monitor the development of AI technology, verify the validity of the AI Act's provisions in the face of dynamically developing AI technology, and successively amend those provisions so that they have not become outdated by the time they apply. In view of the above, it is to be hoped that, despite rapid technological progress, the provisions on the ethical application of artificial intelligence will be continuously updated and the legal norms shaping the development of AI technology will be amended accordingly. If the AI Act achieves these goals to a significant extent, ethical applications of AI technology should be implemented in the future, and the technology, which keeps finding new applications, could then be described as ethical generative artificial intelligence.
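Returning to the point raised earlier about electronically signing works created with AI technology: the snippet below is only a minimal sketch, assuming the widely used Python "cryptography" package and an Ed25519 key pair held by the creator, of how a generated file could be hashed and signed so that later tampering or counterfeiting can be detected. The file name and the key handling are hypothetical and deliberately simplified for illustration.

```python
# Minimal sketch, not a production implementation: hashing and signing an
# AI-generated file so that its integrity and origin can later be verified.
# Assumes the 'cryptography' package (pip install cryptography); the file name
# and key handling below are hypothetical and simplified for illustration.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_generated_work(path: str, private_key: Ed25519PrivateKey) -> bytes:
    """Hash the generated file and sign the digest with the creator's private key."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return private_key.sign(digest)


def verify_generated_work(path: str, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Return True only if the signature matches the current content of the file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# Example usage (hypothetical file name):
# key = Ed25519PrivateKey.generate()
# signature = sign_generated_work("generated_image.png", key)
# assert verify_generated_work("generated_image.png", signature, key.public_key())
```

A practical standard would of course need far more than this, for example trusted registration of creators' public keys and embedding of the signature in the file's metadata, but the basic principle of binding a work to its creator cryptographically is as simple as shown above.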

The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:

OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT


In view of the above, I address the following question to the esteemed community of scientists and researchers:

How should the development of AI technology be regulated so that this development and its applications are carried out in accordance with the principles of ethics?


What do you think about this topic?

What is your opinion on this issue?

Please answer,

I invite everyone to join the discussion,

Thank you very much,

Best regards,

Dariusz Prokopowicz

The above text is entirely my own work written by me on the basis of my research.

In writing this text, I did not use other sources or automatic text generation systems.

Copyright by Dariusz Prokopowicz
