One way to make AI systems more interpretable is to use inherently transparent models, such as decision trees or rule-based systems, whose decision paths humans can trace directly to see how the system arrived at a given output. Another is to use visualization techniques, such as heat maps or scatter plots, that help humans see the patterns in the data a model is learning from.
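As a minimal sketch of both ideas, the example below (assuming scikit-learn, NumPy, and matplotlib, with the built-in Iris dataset used purely for illustration) trains a shallow decision tree, prints its rules in plain if/then form, and draws a simple heat map of feature correlations. It illustrates the general techniques named above, not any particular production system.

```python
# Minimal sketch: an inherently interpretable model (decision tree) plus a
# simple heat map. The Iris dataset is an assumed stand-in for real data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# A shallow tree keeps the rule set small enough for a person to read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text turns the fitted tree into plain if/then rules, so a reviewer
# can trace exactly how any prediction was reached.
print(export_text(tree, feature_names=iris.feature_names))

# A heat map of pairwise feature correlations: one simple way to let humans
# see patterns in the data the model is trained on.
corr = np.corrcoef(X, rowvar=False)
plt.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
plt.colorbar(label="Pearson correlation")
plt.xticks(range(len(iris.feature_names)), iris.feature_names,
           rotation=45, ha="right")
plt.yticks(range(len(iris.feature_names)), iris.feature_names)
plt.title("Feature correlation heat map")
plt.tight_layout()
plt.show()
```

The shallow depth is the point of the design: limiting the tree to a few splits trades some accuracy for a rule set a human can actually audit.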
To build accountability into AI systems, it is important to define the basic conditions for accountability at every stage of the AI life cycle, from design and development through deployment and monitoring, and to lay out the specific questions leaders should be able to answer at each stage.