One of the most important aspects of AI applications is the ethical one. We are all familiar with certain precautions: do not trust facts produced by AI; make AI provide self-explanations and reasoning; avoid situations with undefined authority in decision-making; and so on. We also try to study "ethical borders" empirically, and how they are formed in education (one recent example is the article «ЭТИЧЕСКИЕ АСПЕКТЫ РЕАЛИЗАЦИИ ТЕХНОЛОГИЙ ИСКУССТВЕННОГО ИНТЕЛ...»).

But do examples of systemic ethical frameworks for AI users actually exist in different spheres? And what is the ultimate (and also normative) reasoning behind all these "to do or not to do" rules?