In cognitive computing, humans and machines work together. How will you devise your cognitive computing methodology so that machines and humans can learn from one another?
1) How can machines be constructed to learn from humans?
2) How could this be organised the other way around?
Especially the second issue is quite complex. Are we (humans) interested in learning from machines? Usually, we use them as equipment to learn about different things, but one can assume that a higher level of knowledge transfer is implied here. I recommend looking into psychological studies to get some ideas.
The first question is a bit easier.
First of all, you need sensors suitable to cover the situation: cameras, microphones, etc. Then a feature extraction has to be done, and finally machine learning methods can be used to obtain the important information from the environment and the interaction.
Feedback depends on the task and the situation itself. Providing acoustic feedback in a noisy environment doesn't make sense.
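The sense → feature extraction → classification chain described above can be sketched minimally as follows. All names, the toy features and the threshold "classifier" are illustrative assumptions, standing in for real sensors and a trained model:

```python
# Minimal sketch of a perception pipeline: sensor reading -> feature
# extraction -> classification. The decision rule is a toy stand-in
# for a trained machine learning model.

def sense(environment):
    """Stand-in for a sensor reading (e.g. one audio frame's samples)."""
    return environment["samples"]

def extract_features(samples):
    """Toy features: mean level and a crude energy measure."""
    mean = sum(samples) / len(samples)
    energy = sum(s * s for s in samples) / len(samples)
    return (mean, energy)

def classify(features, energy_threshold=0.5):
    """Toy decision rule standing in for a trained classifier."""
    _, energy = features
    return "active" if energy > energy_threshold else "idle"

def pipeline(environment):
    return classify(extract_features(sense(environment)))

print(pipeline({"samples": [0.9, -0.8, 1.0]}))   # high-energy input
print(pipeline({"samples": [0.1, 0.0, -0.1]}))   # low-energy input
```

In a real system each stage would be replaced by actual sensor drivers, domain-specific features and a learned model, but the control flow stays the same.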
Such questions are addressed in several publications. I just mention a few:
- Wendemuth, A. & Biundo, S. (2012). ‘A Companion Technology for Cognitive Technical Systems’. In: Cognitive Behavioural Systems. COST 2102. Ed. by Esposito, A.; Esposito, A. M.; Vinciarelli, A.; Hoffmann, R. & Müller, V. C. Vol. 7403 LNCS. Dresden, Germany: Springer, pp. 89–103.
- Wilks, Y. (2005). ‘Artificial companions’. In: Interdisciplinary Science Reviews 30.2, pp. 145–152.
- Wilks, Y. (2006). Artificial Companions as a new kind of interface to the future Internet. Tech. rep. Oxford, UK: Oxford Internet Institute, University of Oxford.
- Walter, S.; Scherer, S.; Schels, M.; Glodek, M.; Hrabal, D.; Schmidt, M.; Böck, R.; Limbrecht, K.; Traue, H. & Schwenker, F. (2011). ‘Multimodal Emotion Classification in Naturalistic User Behavior’. In: Human-Computer Interaction. Towards Mobile and Intelligent Interaction Environments. Ed. by Jacko, J. Vol. 6763. Lecture Notes in Computer Science. Springer, pp. 603–611.
- Siegert, I.; Böck, R. & Wendemuth, A. (2012). ‘Modeling users’ mood state to improve human-machine-interaction’. In: Cognitive Behavioural Systems. COST 2102. Ed. by Esposito, A.; Esposito, A. M.; Vinciarelli, A.; Hoffmann, R. & Müller, V. C. Vol. 7403 LNCS. Dresden, Germany: Springer, pp. 273–279.
- Schels, M.; Glodek, M.; Meudt, S.; Schmidt, M.; Hrabal, D.; Böck, R.; Walter, S. & Schwenker, F. (2012). ‘Multi-modal classifier-fusion for the classification of emotional states in WOZ scenarios’. In: Advances in Affective and Pleasurable Design. Ed. by Ji, Y. G. Advances in Human Factors and Ergonomics Series. CRC Press, pp. 644–653.
- Böck, R.; Limbrecht, K.; Walter, S.; Hrabal, D.; Traue, H. C.; Glüge, S. & Wendemuth, A. (2012b). ‘Intraindividual and Interindividual Multimodal Emotion Analyses in Human-Machine-Interaction’. In: 2012 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support. New Orleans, USA: IEEE, pp. 59–64.
Furthermore, there are lots of publications dealing with HCI and user-adapted output.
The question is rather wide and general. I concur completely with Ronald's answer; it is precise, focused and informative in its directions. Modern supervised and unsupervised classifiers can be especially important building blocks in such learning mechanisms: here we can look into Support Vector Machines (SVMs) and other kernel methods, while various forms of cluster analysis also carry considerable potential and can complement the former.
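As one concrete instance of the unsupervised side mentioned above, here is a tiny 1-D k-means clustering sketch. The data and starting centroids are illustrative assumptions; a real system would use a mature library (e.g. scikit-learn) on multi-dimensional feature vectors:

```python
# Toy 1-D k-means: repeatedly assign points to the nearest centroid,
# then recompute each centroid as the mean of its cluster.

def kmeans_1d(points, centroids, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious groups of "feature values" separate cleanly:
centroids, clusters = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 5.0])
print(centroids)
```

The same assign-then-update loop generalizes directly to vectors by swapping the absolute difference for a Euclidean distance.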
I think you should design your machine with a structure as close to the human thinking structure as you can (this is a difficult and open problem). On the other hand, you should give some instructions for humans to think and act in terms of the machine's structure (make some rules so that the human accepts a little regulation in the interaction with the machine). Reinforcement-learning-based algorithms are a good family of algorithms here, since they learn by interaction. Although it is very difficult to define a common structure (at least a common state and action set) for both machine and human, you can approach one by knowing the task very well and specifying the rules for human and machine for a defined task.
I have worked on both human cognitive behaviour and reinforcement learning algorithms. Maybe I can give you more information if you describe your task or application in detail.
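The "learning by interaction" idea can be illustrated with a minimal tabular Q-learning sketch. The toy task (walk right along a line of four states to reach a goal) and all parameters are illustrative assumptions, not a specific published setup:

```python
# Tabular Q-learning on a 4-state line: the agent learns by interacting
# with the environment, receiving reward only at the goal state.
import random

random.seed(0)

N_STATES, GOAL = 4, 3
ACTIONS = [-1, +1]                      # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    """Environment: move on the line, reward 1.0 on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(500):                    # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action choice: explore sometimes, else exploit.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy should move right in every state:
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

The same update rule applies unchanged once a common state and action set for the human-machine task has been defined; the hard part, as noted above, is defining that shared structure, not the learning loop itself.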
Naively, I'm not as focused as the two prior respondents. Wouldn't one first ask the question, "learn what?"
Learn to drive, sure. Google is already doing that. It's a rule-based, sensor-augmented mechanical process.
Can the machine be taught to avoid pain? Maybe, if it is provided reinforcement control and pain is defined in terms of damage avoidance.
Can the machine teach the human to discriminate physiological triggers of pain (frustration, hunger, anxiety, distress)? Or learn to experience the same? Maybe not as easily.
Can we build an engine of cognitive semantic exchange? That's pretty tough, and the best efforts to date, regardless of how sophisticated they appear, really amount to exhaustive annotation from multiple disciplines.
Humans can learn from each other, so no change is needed here. There is little difference whether a human has in front of him another human or a humanoid robot; the human learns from both.
But how about the case of a machine learning from a human?
It looks to me that the “machine” needs to be a humanoid robot. Otherwise the sensory input and limb output are not the same between human and machine, and no learning can be done by the machine.
For the machine to learn, we need a humanoid robot with a brain that is programmed to learn all activities. Today most experimental models have all their activities programmed; such a robot cannot learn from a human, it can only be programmed. But a robot with a brain built for learning everything can also learn from a human being. So the “feedback loop” exists.
If you are interested in the brain of a learning robot, see:
Hi Walter Fritz, thanks for the response. Do you think it is a good idea to devise a centralized brain for a humanoid robot? Instead, could it be distributed intelligence?
Hello Todd Morris, I like the way reinforcement learning is presented in this context. What should our strategy be to train machines to learn to exchange ideas between themselves and with human beings?
Hi Mohammad-Ali Nikouei Mahani, do you think we can have a better approach than reinforcement learning algorithms? Though they are interactive, do they have the scope to make the knowledge acquisition bidirectional? I would also like to know how I can structure what I learn from machines.
Hello Ronald Böck and Irgens Chris, thanks for sharing the details and comments. I will go through them and revert. In addition, my question was also about automating this cognitive feedback between humans and machines. How can we ensure that in each interaction between a machine and a human being both parties are learning from each other and also contributing that knowledge back to the process of semantic exchange?
It seems to me this second question is about how we know what the human has learned from the robot, since the robot can report its own learning back to the semantic web. The answer is that we have to probe the human with quizzes or have them build their own semantic web to report back to the system.
This reminds me of "Betty's Brain", a project I heard about where students learned by teaching the machine a set of relationships regarding a particular subject. They formed a semantic web in "Betty's Brain", and Betty, the artificial intelligence, used the web to answer questions from quizzes, which were then marked for correctness.
In order to track the learning of the human operator, they monitored the students' use of the resources and the "progress" of Betty. When you move out of the controlled atmosphere of a grade-5 classroom, however, it might not be quite as clear how much the human is learning from the robot.
Gokul, I think of the 'meme' [http://en.wikipedia.org/wiki/Meme] as a container.
If the idea is worthy (has a weighted value), then one meme holder (machine or man) may pass the meme (high in concept, low in data) to another for consideration. When a positive value exchange occurs between two or more parties that have judged the meme container's content to be of interest, the meme can be populated with data by all parties, and its 'worthy' value is increased.
This semantic exchange model borrows heavily from the well-proven TCP/IP header implementation: (interest) "Is it for me?"; (construct) size, count of pieces, point of origin, etc.; (content) the packet payload.
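The meme-as-container idea above can be sketched as a small header/payload structure. All field names and the simple "worth increases on acceptance" rule are illustrative assumptions, not an established protocol:

```python
# Sketch of a meme container with a TCP/IP-like split: header fields
# for routing/interest, a payload populated by interested parties, and
# a worth value that grows with each positive exchange.
from dataclasses import dataclass, field

@dataclass
class Meme:
    topic: str                 # header: "is it for me?" routing key
    origin: str                # header: point of origin
    worth: float = 1.0         # weighted value of the idea
    data: list = field(default_factory=list)  # payload, filled on exchange

def exchange(meme, party_name, interested, contribution=None):
    """One party inspects the header; on interest, it may add data
    and the meme's worth increases."""
    if not interested:
        return meme
    if contribution is not None:
        meme.data.append((party_name, contribution))
    meme.worth += 1.0
    return meme

m = Meme(topic="shared-learning", origin="human-A")
exchange(m, "machine-B", interested=True, contribution="sensor log")
exchange(m, "human-C", interested=False)
print(m.worth, m.data)
```

Both a machine and a human can play the `party_name` role here, which is what makes the container a plausible unit for bidirectional semantic exchange.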