This question addresses the broader implications of AI actions, focusing on the responsibility of developers to consider the societal impact of the tasks assigned to machines and the behaviours they encourage.
This is impossible, because people themselves do not agree on such issues. During the corona pandemic, various standards were quickly reversed: it suddenly became ethically acceptable to expose people to inadequately tested gene therapies without any supporting data.
Gilbert Brands, for what computer scientists call ethics, I would prefer to use the notion of ideology.
Ethics is concerned with right and wrong, but ideology is concerned with whose value system it is that's configuring what's right or wrong.
So I guess you're right: it's impossible to agree on one ideology. That is to say, AI is never neutral; it is always ideological. Therein lie the pitfalls of AI behind its shiny, seductive allure, not unlike Maria from Fritz Lang's Metropolis.
Hmm... I use these terms with a different meaning. Ethics does not depend on the field; it simply sets boundaries as to where to stop. Ideology is an irrational system of rules that contradicts reality.
For example, the claim that the mRNA Covid-19 vaccines are real vaccines is an ideology, because it contradicts the rules of immunology. The ethical limit is that no one should be vaccinated against their will. But exactly the opposite happened: vaccinations were forced through regulations. (To quote German Foreign Minister Baerbock, who is widely regarded as an intellectual low performer: a "360° turnaround".)