Lifelong Machine Learning (LML) considers systems that can learn many tasks from one or more domains over their lifetimes. The goal is to retain learned knowledge sequentially and to transfer that knowledge selectively when learning a new task, so as to develop more accurate hypotheses or policies. I believe that knowledge representation will play an important role in the development of LML systems. More specifically, the interaction between knowledge retention and knowledge transfer will be key to the design of LML agents. Lifelong learning research has the potential to make serious advances on a significant AI problem: learning the common background knowledge that can be used for future learning, reasoning, and planning. The work at Carnegie Mellon University on NELL is an early example of such research (Carlson et al. 2010). For a recent paper on this topic, please see the attached paper, "Lifelong Machine Learning Systems: Beyond Learning Algorithms", presented at the recent AAAI Spring Symposium on LML.
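To make the retention/transfer interaction a little more concrete, here is a minimal toy sketch of one possible mechanism: retain the weights learned for each task and warm-start a new task from the retained knowledge. The class name, the linear models, and the averaging transfer policy are my own illustrative assumptions, not the approach of NELL or the symposium paper.

```python
# Toy sketch (illustrative assumption, not the paper's method): a lifelong
# learner that retains per-task knowledge (linear-model weights) and
# transfers it by warm-starting each new task from prior knowledge.
import numpy as np

class LifelongLinearLearner:
    def __init__(self, dim, lr=0.1, epochs=200):
        self.dim = dim
        self.lr = lr
        self.epochs = epochs
        self.knowledge = {}  # retained weights, keyed by task id

    def _initial_weights(self):
        # Knowledge transfer: start a new task from the mean of retained
        # weights rather than from scratch (a deliberately simple policy).
        if not self.knowledge:
            return np.zeros(self.dim)
        return np.mean(list(self.knowledge.values()), axis=0)

    def learn_task(self, task_id, X, y):
        w = self._initial_weights()
        for _ in range(self.epochs):
            grad = X.T @ (X @ w - y) / len(y)  # squared-error gradient
            w -= self.lr * grad
        self.knowledge[task_id] = w  # knowledge retention
        return w

# Two related synthetic tasks: the second starts from the knowledge
# retained after the first, so its initial hypothesis is already close.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
learner = LifelongLinearLearner(dim=5)
learner.learn_task("task1", X, X @ w_true)
learner.learn_task("task2", X, X @ (w_true + 0.1))  # slightly shifted task
```

Real LML systems, of course, face the harder questions this sketch sidesteps: what representation to retain, how to decide which knowledge is relevant to a new task, and how to avoid negative transfer.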

What are your thoughts on this topic?
