Machine learning has seen widespread industry adoption in the last couple of years. One of the breakthrough areas in machine learning is explainability, or model interpretation, which as a concept is still theoretical and subjective. There is still no clear boundary for deciding which part of an AI model should be explained, nor agreed criteria for making that decision:

- Should we explain the features, i.e., what the extracted features look like?

- Should we explain why certain key features are selected for the decision and others are not?

- Should we explain the model output, i.e., how the model arrives at its decision?

It could be important to explain all of them; but do we have any boundary conditions to decide which to explain, and to whom?
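As a concrete illustration of the third point (explaining model output), here is a minimal sketch, assuming scikit-learn, a random forest, and the standard breast-cancer dataset purely as placeholders: it computes a model-agnostic, post-hoc feature ranking with permutation importance, which is only one of many possible answers to "what should be explained".

```python
# Hedged sketch: the dataset, model, and parameter choices are illustrative
# assumptions, not a prescription for how explanation must be done.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a standard tabular dataset and fit an opaque model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature on held-out data and measure
# how much the model's score drops, giving a model-agnostic ranking of which
# features drive the predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Even such a ranking only explains which inputs matter globally; it does not by itself say anything about how an individual feature was extracted or why it was selected, which is exactly where the question of boundaries and audience arises.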
