Dear researchers,

https://en.wikipedia.org/wiki/Explainable_artificial_intelligence

" Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. It contrasts with the concept of the "black box" in machine learning where even their designers cannot explain why the AI arrived at a specific decision. XAI is an implemention of the social right to explanation... Modern complex AI techniques, such as deep learning and genetic algorithms are naturally opaque"

All human learning is based on explanations of knowledge; this is how knowledge transfers from teacher to student. How can a black-box AI transfer knowledge to a human being? Why should we trust AI if it cannot explain its decisions the way humans do?

If we decide that AI is just one more branch of mathematics, why is it so enthusiastically discussed as human-like intelligence?

Best regards,

Konstantin M. Golubev
