Making AI explainable and interpretable involves designing and developing AI models and systems in such a way that users, developers, and other stakeholders can understand how and why the model makes certain decisions.
Some strategies are used to support model interpretability, such as simplifying the model to make it easier to interpret, using variables with a clear business meaning, analyzing the data to identify biases or a lack of fairness in the inputs that may hinder explainability, or analyzing model development and model implementation.
In the rapidly evolving world of Artificial Intelligence (AI) and Machine Learning (ML), the concepts of explainability and interpretability have gained significant traction. Both are of immense value to any business leveraging AI/ML technology to generate predictions. As business and AI leaders navigate this landscape, it is crucial to grasp the nuances of these concepts and their implications for AI development and deployment.
Several approaches can be used to make AI explainable and interpretable: employing simpler models like decision trees, using post-hoc explanation techniques such as LIME and SHAP, identifying feature importance, utilizing visualization tools, extracting rules from complex models, providing interactive interfaces, and ensuring thorough documentation and reporting. These methods help bridge the gap between complex AI systems and human understanding, enhancing transparency and trust.
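As a rough illustration of the feature-importance idea mentioned above, here is a minimal sketch using scikit-learn's permutation_importance on a generic classifier. The Iris dataset and the random-forest model are placeholders chosen for the example, not part of any specific method described here.

```python
# Minimal sketch: model-agnostic feature importance with scikit-learn.
# The dataset (Iris) and the classifier are illustrative placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure the drop in score.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Printing the ranked importances gives a first, model-agnostic view of which inputs drive the predictions, which supports both transparency and trust.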
Some approaches used to make AI explainable and interpretable are:
1. Simpler models such as linear models and logistic regression, because they show clear relationships between input features and output predictions. Decision trees are also valuable, as they provide a straightforward visual representation of the decision-making process, making it easy to follow how the model reaches a conclusion (see the decision-tree sketch after this list).
2. Model-agnostic explanation methods that improve interpretability, such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) (see the SHAP sketch after this list).
3. Rule extraction techniques that transform complex models into a set of IF-THEN rules that are easier to interpret, and that identify patterns and relationships between variables in a dataset, which makes complex models more transparent (the decision-tree sketch after this list prints such rules).
4. Model debugging tools such as the What-If Tool from Google AI, and interactive dashboard tools.
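To make points 1 and 3 concrete, here is a minimal sketch, assuming scikit-learn and the Iris dataset purely as placeholders: it fits a shallow decision tree and prints it as IF-THEN style rules with export_text.

```python
# Minimal sketch: a simple, inherently interpretable model (points 1 and 3).
# The Iris dataset and the tree depth are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text turns the fitted tree into readable IF-THEN style rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Keeping the tree shallow is a deliberate trade-off: a deeper tree may score better but quickly loses the readability that makes it interpretable in the first place.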
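For point 2, a hedged sketch of a post-hoc explanation with the third-party shap package (the gradient-boosting model and the dataset are again placeholders, and the exact API can vary slightly between shap versions):

```python
# Minimal sketch: post-hoc explanation of a black-box model with SHAP (point 2).
# Requires the third-party 'shap' package; the model and dataset are placeholders.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Summary plot: which features drive the predictions, and in which direction.
shap.summary_plot(shap_values, X.iloc[:100])
```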
Let us first distinguish between "explainable" and "interpretable" AI. Explainable AI refers to the ability of an AI system to provide explanations for its decisions or behaviour. Interpretable AI refers to the ability of an AI system to be understood and interpreted by humans.
You could say that explainable AI is used by managers to decide whether or not to rely on the support produced by the system to accomplish their task, while interpretable AI is used by programmers to adjust the system to the needs of the organisation and the task.
It depends heavily on the descriptors and the algorithms you are using. Models based on Artificial Neural Networks are inherently non-explainable (or very hard to explain), while other methods like Decision Trees allow for quick and easy explanations of the models.
This is something we have discussed in detail in some of our research, for example:
https://doi.org/10.1016/j.comptc.2024.114782
Read that, and if you have any other questions, feel free to DM me.