I attended the ICAS 2019 conference at DU, where Professor Dr. Wendy Martinez discussed data ethics. What should our ethical decision be in the case of AI?
Let me explain with an example. Suppose that for an AI-driven car an accident is unavoidable; which of the following should it choose?
1. Hit the barrier on the road, which may damage the car and injure the passengers.
2. Hit the pedestrians.
In a human-driven car, the choice largely depends on things like the number of pedestrians, the number of passengers, the ages of both, the type of barrier, and so on. But for an AI, what should the ethical option be?
In my view, different countries have different ethical perspectives, since culture and ethics differ from place to place. So different AI devices might adopt the kind of decision that suits their specific region. To reach those decisions, though, the different AI-making companies could follow some common guidelines.
I would suggest that some globally recognized organizations take up this concern. First, they would prepare answer choices for the questions that arise and run a survey on those issues. The survey should require a minimum number of participants for its result to count. The survey result would then be presented to a jury board of that specific region, which would finalize the decision for AI. There should also be specific laws for AI in that region. Finally, the algorithm would be finalized and applied.
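If a jury board did finalize such a rule, the vehicle's software could simply look it up at decision time. Here is a minimal Python sketch of that idea; every region name, policy field, and threshold below is invented purely for illustration, not taken from any real standard. The point is only that the ethical rule becomes region-specific data approved by the jury process, rather than a single hard-coded global constant:

```python
# Hypothetical policies: each entry stands for the outcome of the
# survey + jury process for one region. All values are made up.
REGION_POLICIES = {
    "region_a": {"prefer": "barrier", "min_pedestrians": 1},
    "region_b": {"prefer": "barrier", "min_pedestrians": 2},
}

# Fallback used when a region has not yet finalized its policy.
DEFAULT_POLICY = {"prefer": "barrier", "min_pedestrians": 1}

def choose_action(region, pedestrians):
    """Return 'hit_barrier' or 'hit_pedestrians' per the region's policy.

    A real system would weigh far more factors (ages, barrier type,
    passenger count, local law); this sketch keeps only one threshold.
    """
    policy = REGION_POLICIES.get(region, DEFAULT_POLICY)
    if policy["prefer"] == "barrier" and pedestrians >= policy["min_pedestrians"]:
        return "hit_barrier"
    return "hit_pedestrians"
```

So two regions with different jury decisions would yield different behavior from the same software, which is exactly the region-specific outcome described above.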
(Note: obviously there should be no devices of a kind that may harm mankind. But, as you know, failures do occur, and in those cases we should follow such an ethical policy.)
I would like to hear your views on this.