Training a machine solely by memorizing correct answers is not a robust approach, for several reasons:
Limited Generalization: If a machine only memorizes specific examples, it won't generalize well to new, unseen situations. True intelligence involves applying knowledge to novel scenarios and making reasonable predictions or decisions (see the sketch after this list).
Lack of Adaptability: Memorization doesn't allow machines to adapt to changes or variations in the data. If the environment or circumstances shift slightly, a memorization-based system may fail because it doesn't understand the underlying principles or patterns.
Inefficiency: Memorizing all possible correct answers is often impractical or impossible due to the vast and continuously evolving nature of real-world data. This is particularly true for tasks that require a deep understanding of language, context, or abstract concepts.
Contextual Understanding: Machine learning models benefit from learning the relationships and context within data, which lets them make informed decisions rather than rely on rote memorization. Language models, in particular, depend on understanding the context in which words and phrases are used.
Scalability Issues: Even if memorizing a large dataset were possible, such an approach scales poorly. It's not feasible to store every possible data point, especially in dynamic and complex domains.
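To make the contrast concrete, here is a minimal sketch in Python comparing a pure lookup-table "memorizer" with a model that fits a simple linear rule. The class names and toy data are illustrative assumptions, not taken from any real library.

```python
# Illustrative sketch: memorization vs. learning an underlying pattern.

class MemorizerModel:
    """Stores exact (input, answer) pairs; has no answer for anything unseen."""
    def __init__(self):
        self.lookup = {}

    def train(self, examples):
        # "Training" is just recording every correct answer verbatim.
        for x, y in examples:
            self.lookup[x] = y

    def predict(self, x):
        # No generalization: unseen inputs return None.
        return self.lookup.get(x)


class PatternModel:
    """Fits a simple linear rule y = a*x + b by least squares."""
    def __init__(self):
        self.a = 0.0
        self.b = 0.0

    def train(self, examples):
        xs = [x for x, _ in examples]
        ys = [y for _, y in examples]
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in examples)
        var = sum((x - mean_x) ** 2 for x in xs)
        self.a = cov / var
        self.b = mean_y - self.a * mean_x

    def predict(self, x):
        # Generalizes: any input gets a prediction from the learned rule.
        return self.a * x + self.b


if __name__ == "__main__":
    # Toy training data that follows the rule y = 2x + 1.
    train_data = [(1, 3), (2, 5), (3, 7), (4, 9)]

    memorizer = MemorizerModel()
    memorizer.train(train_data)

    learner = PatternModel()
    learner.train(train_data)

    for x in [2, 10]:  # 2 was seen during training, 10 was not
        print(f"x={x}: memorizer -> {memorizer.predict(x)}, "
              f"pattern model -> {learner.predict(x)}")
```

The memorizer answers correctly only for inputs it has already seen and returns nothing for x = 10, while the pattern model recovers the underlying rule y = 2x + 1 and extrapolates to unseen inputs, which is the essence of generalization.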