In a very general sense, I tried to address this basic problem in a recent proposal, Entropy Driven Deep Learning. The abstract below may provide an overview; I look forward to your comments. A new proposal, coming in a few days, will also take up the subject:

The present proposal investigates a novel visual machine and addresses open issues in contested areas of computer vision, such as eye movements, attention, and deep learning. Our research centers on a basic but fundamental question that challenges most existing models: how do learning and recognition encode the difference between intensive looking and casual glancing? This is a hard problem that goes beyond the physical boundaries of visual objects and beyond feature maps. We believe that visual invariance requires additional fuzzy/relative evaluation, which could reshape the direction of innovation, especially in proactive vision. Studies of human vision serve as a general road map; even so, the problem poses new challenges to existing systems, demanding greater adaptive capacity and a human-like efficiency in interacting with the surrounding environment. We examine a plausible learning mechanism by which exogenous vision organizes an endogenous 3-D solution space, extending human-machine interaction. Results show that the interaction between the system and visual objects drives an endogenous 3-D space that retains behavioral trends, retracing activities in real-world visual scenarios. To this end, we investigate system-object interactions that generate sufficient energy for organizing and reorganizing a deep-layered learning apparatus.

In brief, we combine time and energy in self-adaptive processes; a novel approach is used to detect and map high-dimensional visual information in order to organize a 3-D space. A deep-learning neural structure is proposed to demonstrate serial and parallel processing in proactive vision.
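The proposal does not specify how high-dimensional visual information is mapped into a 3-D space, so the following is only a minimal illustrative sketch under an assumed design: hypothetical visual feature vectors are projected to three dimensions with a standard PCA projection (via SVD). The data, dimensions, and choice of PCA are my assumptions, not part of the proposal.

```python
import numpy as np

# Hypothetical stand-in for high-dimensional visual features:
# 200 feature vectors, each 128-dimensional.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 128))

# Center the data, then project onto the top three principal
# directions to obtain a 3-D "solution space" (assumed mapping).
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
points_3d = centered @ vt[:3].T  # each row is one point in 3-D

print(points_3d.shape)  # (200, 3)
```

Any learned, adaptive mapping (e.g. an autoencoder) could replace the PCA step; the sketch only shows the reduction from a high-dimensional feature space to an organized 3-D space.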
