The recent breakthroughs in large language models (LLMs) have created a new situation in which humans can converse with AI using natural language. However, many questions remain about how these models understand language, how they learn, how they generate responses, and how they make predictions. I am wondering whether we can say that a model like ChatGPT (based on GPT-3) understands the meaning of words, and whether it is possible to prove, from a philosophical or technical point of view, that it does (or does not) understand the meanings of the words, sentences, and paragraphs it generates.