Work on Representation Learning (Bengio, Hinton), Self-taught Learning (Ng), Transfer Learning (Yang), Lifelong Machine Learning (Silver, Thrun, Eaton), and Deep Learning Architectures (LeCun, Bengio, Ng) shares a common interest in developing knowledge of the world from experience (training examples). For the most part, this knowledge has been used to improve the effectiveness or efficiency of future learning. Perhaps it is time to consider learning representations for use by broader AI systems that can reason and plan with learned knowledge, as in some of the work of the Neural-Symbolic Integration (Garcez, Lamb) community. Being able to reason with learned knowledge places an additional constraint on the representational language and search methods used by machine learning systems. This would seem to be an important step toward creating smarter robots and software agents. Have others been doing work on, or do others have an interest in, this area?
For some additional thoughts, please see
https://www.researchgate.net/publication/242025539_On_Common_Ground_Neural-Symbolic_Integration_and_Lifelong_Machine_Learning/file/e0b4951cc32892064b.pdf