Large Language Models (LLMs) have become increasingly popular in recent years, spreading into more and more areas of application. These tools can clearly improve labor efficiency, although their use does not always improve quality. Lately, however, alarm bells have been ringing more and more often.
An LLM is an extremely powerful tool, capable of “solving” problems and “answering” questions formulated directly by the user. While the thoughtful use of such a tool in professional hands increases productivity, its use by students to solve assignments raises concerns. When checking students' reports on their “scientific” practice, signs of the use of language models, or of their influence on thinking, could be seen in almost all, if not all, of the works: overuse of lists, single-sentence paragraphs (that is, a reformatted list), and an inability to write coherent text even when given samples. Yet education, higher education first of all, is training in the ability to solve problems: scientific, engineering (or other, depending on the field), and, finally, social. And if the social component already suffered during the Covid era of remote learning, the scientific and specialized components are now beginning to suffer from LLMs: the ability to quickly get an answer and/or step-by-step instructions (let's not fool ourselves: university programs and assignments are quite standard) does not encourage anyone to spend effort and time solving a problem independently. As a result, the ready availability of LLM assistance with simple problems makes it practically impossible to move on to independently solving complex problems, the answers to which an LLM does not provide and most likely never will.
Thus, the emergence of LLMs represents not only a potential leap in productivity and in the automation of labor across many fields (the scale of which we have yet to assess), but also a real challenge for the education system, both higher and specialized, in every area that works with information, and that is not only IT. Meeting this challenge will probably be very painful: the ongoing improvement of LLMs, together with recent well-known scandals over false positives of machine-text detectors (for example, essays by foreign applicants and students being penalized), shows that simply “banning” them is not an option. This means the education system will have to learn to teach the wise use of LLMs, and it is not at all obvious that this can be done head-on.
Hence the question: what should we do to avoid a gap in problem-solving skills in future generations?