Hi Everyone,

I have been working on machine learning and artificial intelligence for quite a few years. Over that time, I have developed and trained many black-box models. Overall, the models seem to work, but what they lack is the explainability of something like a linear regression model. I am therefore thinking of building explainable (maybe not 100% explainable) models, so that we can increase the end user's trust and make small adjustments to the model when needed. My question is: is something like that possible? Any references would be appreciated.
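To make the kind of explainability I mean concrete, here is a minimal sketch of one post-hoc approach using the shap library (SHapley Additive exPlanations). The random forest and dataset are just placeholder assumptions for illustration; this does not make the model itself interpretable, but it attributes each prediction to the input features, somewhat like reading coefficients in linear regression:

```python
# Minimal sketch: post-hoc per-prediction explanation of a black-box
# model with SHAP. Model and dataset are arbitrary stand-ins.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)

# Any "black box" model works here; a random forest is just an example.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature contribution to the first sample's prediction:
print(dict(zip(X.columns, shap_values[0])))
```

Is this kind of post-hoc attribution the right direction, or is it better to build inherently interpretable models from the start?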
