Can the responses of intelligent chatbots available on the Internet be “arbitrarily programmed in algorithms,” biased and predetermined by their creators, or are they instead a statistical product of the data on which they were trained?
The prevailing opinion on this subject is that the advanced language models currently under development, such as ChatGPT, Copilot and other intelligent chatbots, are based on artificial intelligence algorithms that learn by analyzing vast amounts of text and data. These algorithms are not “preloaded” with the specific views or worldviews of their creators; rather, they are trained on data that reflects the diversity of thoughts, ideas and perspectives present in society. In practice, this means that the views and values expressed by such models are a product of the data on which they were trained, rather than being directly derived from assumptions imposed by their creators. Thus, language models do not have a built-in “worldview,” but they may reflect or reproduce the dominant narratives, biases and patterns present in their training datasets. But what is your opinion on this topic?
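The claim that a model's outputs are a statistical product of its training data can be illustrated with a deliberately simplified sketch. The toy bigram "model" below (a hypothetical illustration, not how real chatbots are built) shows that when one continuation dominates the training corpus, the model reproduces that dominant pattern:

```python
# Toy illustration: a bigram "language model" whose outputs are purely
# a statistical resultant of the text it was trained on. Real LLMs are
# vastly more complex, but the underlying point is the same: frequent
# patterns in the data become the model's "opinions".
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count word-pair frequencies in the training corpus."""
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def most_likely_next(model, word):
    """The model's 'view' is just the most frequent continuation."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

# A deliberately skewed corpus: the dominant narrative wins.
corpus = "AI is risky . AI is useful . AI is useful . AI is useful ."
model = train_bigrams(corpus)
print(most_likely_next(model, "is"))  # prints "useful" - the majority pattern
```

No view was "programmed in" here; the output simply mirrors the statistics of the corpus, which is the core of the argument above.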
I described the key opportunities and threats associated with the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
Please share your thoughts on this issue. Do you see more threats or more opportunities associated with the development of artificial intelligence technology?
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz