Transparency is one of the most frequent catchwords in discussions of ethics in AI and machine learning. The background is that the answers delivered by learning machines have been argued to come out of a 'black box'. But is there any ongoing work focused on making the process more transparent? I assume this would mean making AI systems not only produce answers or recommendations, but also explain and supply supporting evidence for the solutions they present.
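
To make the question concrete, here is a minimal sketch of the kind of behavior I have in mind, using an inherently interpretable model (a decision tree via scikit-learn). The dataset and model choice are illustrative assumptions on my part, not a claim about how any particular system works:

```python
# A minimal sketch of "prediction plus explanation", assuming scikit-learn
# is available. The dataset and model here are illustrative only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train an inherently interpretable model on a toy dataset.
data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# The model produces an answer...
sample = data.data[:1]
prediction = data.target_names[model.predict(sample)[0]]
print(f"Prediction: {prediction}")

# ...and the learned decision rules can be printed as a human-readable
# justification of how any such prediction is reached.
print(export_text(model, feature_names=list(data.feature_names)))
```

Is there research going in this direction for less transparent models (deep networks, ensembles), where such rules cannot simply be read off the model?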
