The following is my argument in response to the questions above:

An unbounded number of situations may arise abruptly during an autonomous vehicle's journey, each requiring an immediate decision to minimize casualties. Hence, the problem is not just how to program the vehicle to handle a few scenarios that represent ethical dilemmas; it is also about the scenarios the vehicle has not been programmed to respond to adequately, a severe limitation of an AI system that cannot adapt to new, unplanned situations.

This also applies to meta-learning, often described as "learning to learn", in which a model is trained on a wide array of tasks with the aim of extracting patterns common to those tasks so that it can perform well on new ones. The vehicle's responses are therefore still bounded by the finite distribution of tasks it was trained on, leaving it unable to cover every possible variation of real-world situations in which lives may be at risk.
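To make the meta-learning point concrete, here is a minimal, purely illustrative sketch of a Reptile-style meta-learning loop in Python. The task family (simple linear regressions), the model, and the hyperparameters are assumptions chosen only for demonstration, not taken from any actual driving system. The point it illustrates is the one above: adaptation to a new task from the same distribution is fast, while a task outside that distribution remains uncovered.

```python
# Illustrative Reptile-style meta-learning sketch (not a driving policy).
# Tasks are linear regressions y = a*x + b with task-specific (a, b).
# The meta-learned initialization adapts quickly to NEW tasks from the
# SAME distribution, but nothing guarantees good behaviour on tasks
# outside that distribution.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Draw one task: slope and intercept from a fixed (finite) range."""
    return rng.uniform(-2, 2), rng.uniform(-1, 1)

def task_batch(a, b, n=20):
    x = rng.uniform(-1, 1, n)
    return x, a * x + b

def loss_grad(w, x, y):
    """Gradient of mean squared error for the linear model w[0]*x + w[1]."""
    err = w[0] * x + w[1] - y
    return np.array([2 * np.mean(err * x), 2 * np.mean(err)])

def inner_adapt(w, x, y, steps=5, lr=0.1):
    """A few gradient steps on one task, starting from the meta-parameters."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * loss_grad(w, x, y)
    return w

# Outer (meta) loop: nudge the initialization toward each task's adapted weights.
meta_w = np.zeros(2)
for _ in range(1000):
    a, b = sample_task()
    x, y = task_batch(a, b)
    adapted = inner_adapt(meta_w, x, y)
    meta_w += 0.05 * (adapted - meta_w)   # Reptile meta-update

# Fast adaptation on a new task drawn from the same distribution...
a, b = sample_task()
x, y = task_batch(a, b)
w_new = inner_adapt(meta_w, x, y)
print("in-distribution task error:", np.mean((w_new[0] * x + w_new[1] - y) ** 2))

# ...but a task outside the training distribution (e.g. quadratic) is not covered.
y_ood = 3 * x ** 2
w_ood = inner_adapt(meta_w, x, y_ood)
print("out-of-distribution error:", np.mean((w_ood[0] * x + w_ood[1] - y_ood) ** 2))
```

The sketch deliberately uses a toy task family to keep the mechanism visible: the meta-update only ever sees tasks drawn from the chosen distribution, so the resulting initialization encodes nothing about situations that lie outside it.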

Accordingly, reaching solutions to such ethical dilemmas may amount to little more than an intellectual exercise. Moreover, developing more efficient sensors would only yield more accurate data, whereas the real problem is making a correct and timely decision based on those data in unexpected situations. Finally, based on the discussion above, when an accident occurs the responsibility lies with the entities, or the legislation, that allowed such limited systems to handle these situations without ensuring proper human supervision.
