Yes, BCI tries to control machines and robots by analysing our brain signals, and some projects have already been done, for example controlling a robotic arm by EEG signals.
The day robots are able to read our minds, we may well become their slaves. But this could have other important implications as well, such as in healthcare, EMS and preference theory. There are risks too, since they might act in ways that fulfil their own existential interests unless we decide for ourselves whom to give access to our minds. Indeed, we may one day be able to share our memories and thoughts through a common mental gateway, but that is a complicated business too...
check this out: Media Network Center: http://www.wiz.cs.waseda.ac.jp
Thanks Mehdi. You can read Asimov's Three Laws of Robotics and Daneel's "Zeroth Law of Robotics".
The zeroth law of robotics states that:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm".
Naveena: That might be possible. As a matter of fact, the concept of mind reading is a complex one, where one first needs to define what our "mind" really is.
Is it consciousness or memory, a mental state or cognition, metacognition, physical and biochemical memory proteins, or neuromagnetodynamics? How one defines the mind depends on how one describes the attributes of the mental medium, the mind itself, or its cognitive interpretations.
Put another way, if machines are to read our minds, we would surely read back their bionic artificial minds as well: a two-way mental gateway, I would suppose.
Akash: Some intelligent systems are already in place and in application which can read and recognize handwriting, read lips, and recognize faces, all of which are complex pattern recognition tasks.
If computers or AI units are ever able to read human minds, and indeed they might achieve that in the future, then the future of software programming is very bright. Just think about thought-based coding and thought-activation systems!
Sidharta, you have a bright mind and your opinions are really interesting to me. Our brain, especially the brain of a typical person with normal behaviour, can be supposed to be a black box with a very complex dynamic system inside, reacting to what it perceives; it could then be identified by machines, even our emotional actions. The Terminator movies touched on this idea, but I believe that for some special persons it will be impossible for machines to identify and predict behaviour. Computers can already identify and predict the dynamics of complex systems by machine learning methods in ways that we cannot. But they will not be able to feel some of our emotions.
Thanks Mehdi, we all have bright minds and I am just a follower of the faith. You are quite correct to say that emotions may be identified by machine intelligence some day. The special persons you mention are those who are unpredictable, random, unstable and volatile, or say, uncertain. And so such machines would need to deal with ambiguities and uncertainties.
Like the Wright brothers, who stopped imitating birds, started to learn about aerodynamics, and thereby conceived the real flying machine, the aircraft: in a similar vein, could we say that using the human mind as a blueprint to build symmetric machines equivalent to the working capacity of the human brain would solve all our problems and serve all our purposes?
Here lies the thinking out of the box that you have mentioned, and it is very important that we stop imitating the human brain and think about the real psychodynamics of mental activities. We often see only a very short distance ahead and ignore how much of the distance we would ever need to cover.
Empowering AI systems with creativity and imagination would no doubt increase their efficiency, but that is not the end of the road. Where is the guarantee that AI robots would become good citizens and would be bound to obey all our orders? For that, we will require time-variant rules that adapt the systems according to our choices. There are limitations in the human mind, and so there will be in machines too. Yet you can indeed derive extraordinary thoughts from simple actions, and here lies the technology.
Or are we going to make machines dumber by teaching them the bogus routines that led us to where we are today? You can think on this aspect in some detail if you wish. It would then be just bogus 'metal dynamics' rather than 'mental dynamics', I would suppose.
You can also read about the fuzzy logic of Dr. Lotfi Zadeh, which was one of the ways to think out of the box. You can also refer to the Blue Brain Project.
OK, the question was: "Is it possible to create a robot which can read our minds and act according to it?"
There are two parts to the question: thought reading + a robot (or any machine) acting upon the thought.
1) Thought reading already exists, but it is still at its roots. It involves mapping brain wave patterns, captured through EEG signals, to specific thoughts. For example, if you wish to accelerate a car, you need to pre-record the EEG signals (the pattern of EEG signals) from your brain while you are thinking of 'accelerating your car'. After this training is done, whenever you think of 'accelerating your car', a similar (not exact) brain wave pattern emerges; the EEG equipment picks it up and compares it to the pre-recorded pattern stored on a computer. If it is similar enough, you can get any desired output.
Misconception: if you now think of "turning left/right" in your car, the car IS NOT going to turn left/right. Why? Because you have not pre-recorded these signals, so the computer will not recognize the patterns.
2) Now that you have a desired output from your 'thoughts', you can do whatever you want with those signals and use them to make a robot (or any machine) do whatever you want it to do. It will be just like having a 'remote control' in your hands, except that instead of pressing buttons you will be 'thinking' of something.
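The train-then-match scheme described above can be sketched in a few lines. This is a hypothetical illustration only (real BCI decoders are far more sophisticated): a live EEG window is compared against each pre-recorded template using a normalized correlation, and a command fires only when the match exceeds a similarity threshold; the threshold value here is an arbitrary choice.

```python
import numpy as np

def correlate(live: np.ndarray, template: np.ndarray) -> float:
    """Pearson-style correlation between a live EEG window and a template."""
    live = (live - live.mean()) / live.std()
    template = (template - template.mean()) / template.std()
    return float(np.dot(live, template) / len(live))

def decode(live: np.ndarray, templates: dict, threshold: float = 0.7):
    """Return the command whose template best matches, or None."""
    best_cmd, best_score = None, threshold
    for cmd, tpl in templates.items():
        score = correlate(live, tpl)
        if score > best_score:
            best_cmd, best_score = cmd, score
    return best_cmd

# Training phase: record a pattern while the user thinks "accelerate".
rng = np.random.default_rng(0)
accelerate_tpl = rng.standard_normal(256)
templates = {"accelerate": accelerate_tpl}

# Later, a similar (not exact) pattern appears and is recognized...
live = accelerate_tpl + 0.3 * rng.standard_normal(256)
print(decode(live, templates))

# ...but an untrained thought ("turn left") matches nothing.
print(decode(rng.standard_normal(256), templates))
```

This also captures the misconception noted above: a thought that was never recorded produces no recognizable pattern, so the decoder returns nothing.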
It already exists. I think in Germany there were successful attempts to drive a car with just thoughts. So imagine replacing the car with any dream robot of yours. A robot butler, perhaps :)
Dear Nitish: You are quite correct to mention that Germany has successfully experimented with intelligent, thought-driven cars, but I would like to mention that initial tests of autonomous cars were first spurred by America's DARPA, which organized races for such cars in the mid-2000s. Then, Google hired some of the engineers from those DARPA challenges to build a self-driving car that was quite successful. They equipped their car with a myriad of sensors, a combination of LIDAR (light detection and ranging) and radar sensors, which monitored the environment continuously by computing the distance to nearby objects. So, in that sense, those cars were not thought-driven at all. However, it was Germany's Free University of Berlin research group that designed the first AutoNOMOS car, which is indeed self-driving and completed 80 km during test drives as their 'Made in Germany' car. The car was driven by computers, with a driver behind the steering wheel for safety. The commands from the computers were fed to a system wired directly to the accelerator, brakes and steering wheel, while multiple sensors armed with motion detectors and visual mapping of the road analysed information about other vehicles and people on the streets.
Thus, 'Made in Germany' was the first automated car ever licensed for simulated driving using brain-wave recordings of EEG signals, tested on the streets as well as on the highways in Germany. The project was headed by Raúl Rojas.
Following their success, several other groups ventured into the same area, and AI was soon applied to autonomous helicopters that could learn to fly on their own by 'watching' others fly. They learned flying by observing the flights of radio-control pilot Garett Oku. The helicopters carried gyroscopes, accelerometers, GPS, magnetometers and flight motion simulators that collected feedback continuously and adapted the trajectory to stay balanced in the air.
Now, we already have motion-critical aerial unmanned vehicles (MCAUV), used in combat surveillance by DARPA and others. But as a matter of fact, they still do not actually run by thought-driven manoeuvres.
The EEG, or electroencephalography, captures brain waves as electrical signals generated by the stimulation of certain parts of the brain during action, thought, dreams, sleep and reasoning; correlating and measuring those signals against a particular thought process is still in its infancy.
The correlation of specific cortical-area stimulation with specific tasks is studied using fMRI and TMS, which are non-invasive, but the problem arises when multiple signals are generated by a single task, or say, when multiple areas of the brain cortex are activated and the signals overlap. But things are moving fast, and we can soon expect such thought-driven simulations. We already have mind-driven artificial limbs, so it is no surprise that AI can be applied to automated mobile vehicular systems.
German engineers have already conceived such mind-driven automation, marrying mind control to cruise control, called 'BrainDriver'.
Hence, brain scans using functional imaging and wave-detection technology can be a viable alternative to EEG signals, or vice versa.
check this out:
http://www.lifeslittlemysteries.com/1126-mind-control-thought-driven-car.html
Brain control of artificial limbs does not involve brain scans. It uses direct signal capture from the neurons/nerves of the limb that has been amputated, and hence has nothing to do with thought control. [But the neurons of the hand do get activated when you think about moving your hand, whether you have an amputation or not. The activation, or nerve impulse, is picked up by electrodes that are surgically implanted.]
EEG signal acquisition is more viable than brain scans through imaging because of the cost and the immense amount of data that has to be processed very quickly if you want a real-time thought-controlled device (imaging devices are very large as well). Granted, it is not very accurate, but it is sufficient to do the job. Advances in this field are actually under way, but very slowly, due to a lack of funding and also because it involves professionals from independent fields of study, neuroscience and engineering, where the scope for integrating both is very limited. I know this because I'm a biomedical engineer. Even the link you gave me showed them using EEG as the means of signal acquisition.
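To make the data-volume point concrete, here is a back-of-envelope estimate. All the figures (channel count, sampling rate, sample width, fMRI matrix size and repetition time) are assumed for illustration; real systems vary widely.

```python
# Raw data rate of a mid-range EEG cap (assumed specs).
channels = 32          # electrodes
rate_hz = 256          # samples per second per channel
bytes_per_sample = 3   # 24-bit ADC

bytes_per_second = channels * rate_hz * bytes_per_sample
print(f"EEG: {bytes_per_second / 1024:.0f} KiB/s")

# For contrast, one fMRI volume (assumed 64x64x30 voxels, 16-bit)
# acquired every 2 seconds -- a much bigger stream, and each volume
# already arrives seconds late relative to the neural event.
fmri_bytes_per_second = (64 * 64 * 30 * 2) / 2
print(f"fMRI: {fmri_bytes_per_second / 1024:.0f} KiB/s")
```

The gap in machine size, cost, and latency, rather than the raw byte counts alone, is what makes EEG the practical choice for real-time control today.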
AI has nothing to do with this topic apart from the pattern recognition of brain signals. What we're talking about is robots that will work as directed by our thoughts.
The New Minds of Artificial Intelligence: Some Thoughts
Rethinking AI under a New Paradigm:
AI is indeed a true scientific domain, one wired together from other specialized disciplines including linguistics, NLP, the philosophy of thought, natural intelligence, psychosomatology, biomechanics, neurodynamics and, to some extent, neuroeconomics, including the aspects of human decision making. Scientists and futurists like Markram, David Levy, Zadeh, Newell and Simon, as well as others, were all optimistic about where AI is headed. Their theories, grounded in the early 1950s and 60s, immensely influenced thinkers of the present generation in the AI domain, and there is now a whole new approach to redefining it, given what we know about the real machinery of the human mind and about technical systems of synthetic creation. If creating such 'intelligent machines' has been their new endeavour, then redefining human thought is another approach, linking the human mind to machines as Neil Gershenfeld, director of the Mind Machine Project (MMP), has envisioned: creating real intelligent machines, "whatever that means". This time around, they are out to correct some fundamental misconceptions buried deep in the realms of AI research over the years. This endeavour, in part, is perhaps to emboss "mental equality" between man and machine as well as to impart "mental uniqueness" in machines themselves. I believe this may, in times to come, help redefine human-machine interfaces in great spirit. An example can be found by simply observing how challenges in intelligent automation in transportation would likely be dealt with when vehicles are deemed to run by mental aid, or thought-invoked.
Constraints of Thought:
There are certain constraints that need to be dealt with before such applications can be commercialized at full scale, particularly with thought-based models, which run the risk of over-reading human thought and over-acting on human desires. Likewise, while driving, even idly thinking about the accelerator or brakes may prove catastrophic; hence, thought-based control would need to be supplemented with voice activation to decide whether or not to allow the system to follow such decisions.
In essence, the machinery of the human mind is very complex, and our thought processes are often random and quite unstable. We think about too many things at the same time, which would raise ambiguities for a machine trying to understand and follow in such a parlance. The fundamental assumptions of AI, if they are to be based on the states of our memory and thought, would need to be followed through quantitative paradigms rather than mere qualitative aspects. These are indeed hedonistic imperatives that would define both machine and human intelligence beyond memory and cognition, and hence would be too difficult to emulate in machines. Indeed, modelling thought in physical form is by itself an intriguing job, since there are many shapes of thought that define human actions. Integrating disparate pieces in a way compatible enough for machines to follow, and hence seeking a universal theory capable of modelling organized thought, is perhaps beyond the realm of modern AI systems, though it would be fabulous for fictional fantasies. The supercognitive models of thought and cognition, of consciousness and feeling as the perception of realities, have been the cornerstone of human mental evolution through the ages: reasoning with data that we perceive as ambiguous and inconsistent, and through our judgemental inference of both substantive and procedural rationality, we come to understand the nature of those realities that often present themselves to us as uncertainty. This could be too difficult to model in machines, since they do not have their own foundational paradigms and rely merely on us, and the human mind has certain limitations.
Mental Equality and Mental Uniqueness:
I hope these issues will be covered in the near future. The breakthroughs of yesteryear that laid today's foundations of the scientific thought of AI could well lead us to tomorrow's sparkling inventions, mind-boggling enough to catch the fancies of our future generations. On such a visionary voyage, machines would collect information as inputs and convert it into intelligent yet effective decisions as outputs, smart enough to be categorized as true 'peers' of the hominid species. And those will be the "New Minds" of artificial intelligence.
Nitish: I would like to thank you for contesting the above issue, which is not my personal claim but a proven experiment by a group of Canadian scientists who are indeed working on brain control of artificial limbs involving brain scans. The link is below and you can see it for yourself.
http://www.engadget.com/2011/07/05/canadian-scientists-scan-your-brain-know-how-you-want-to-hold-y/
So, this page has something to say about prosthetic-limb motion control using fMRI scans, and this may hold good for the future, when we will have portable scanners. That day is not far either.
But it is indeed good that we clarify things for each other, which helps to clear doubts about such theoretical models.
Also, AI is not just about pattern recognition, as there are numerous other techniques behind robotic biomechanics, e.g. ANNs, fuzzy logic, etc., since automated motion control does not work on pattern recognition alone. In fact, I have written a paper on pattern recognition titled "The Neuroeconomics of Learning and Information Processing; Using Markov Decision Process"; though a formal paper, you may find something on the subject in it.
http://ideas.repec.org/p/pra/mprapa/28883.html
I'd say yes, of course. As others have already indicated, rudimentary recognition of the brain activity patterns associated with specific motor neuron control is already a reality and is being used to control wheelchairs and even prosthetics, for example. It won't be long before mass sampling techniques yield sufficient data for the precise characterisation of these patterns, paving the way for the production of machines that can read minds accurately (without a learning phase) and act accordingly. Whether the response to such observed stimuli also includes an emotional or 'moral filtering' component is down to us and the way we decide to programme these machines in future. Those who say machines will never be capable of an emotional response to a stimulus choose to ignore that we humans and higher-order animals, who do so react, are but biological machines programmed by our genes.
With present technology, there is no way to read our minds. But instead, we can make or train robots to understand our body language (unique to each individual) and our speech rhythm, to understand our minds and consequently act accordingly.
Bruce: Indeed, I agree that we are biological machines programmed by our own genes, but it is also true that genes do not always affect all our behaviours all the time. Yet in the case of AI-based robots, their behavioural repertoires are limited in the sense that they are only programmed to behave in the way we dictate; our programmes are but algorithms, often based on heuristic rules and routines, i.e. they are still not able to think independently and out of context.
What differentiates our behaviour is our special ability to transform perception into corresponding actions, which our hypothalamus is programmed to do. The areas of the brain responsible for emotion and emotional control cannot be directly replicated in machines unless we apply nanotechnology and something beyond it. We should also empower such machines to control and learn from social situations, else they would just turn into ravaging beasts...
Our receptor proteins are not only able to detect patterns, but also to recognize differences within and among such patterns, and to detect randomness, which of course machines do well enough. So, I cannot quite figure out the point you made about machines reading minds without a "learning phase"; could you explain it in a little more detail? It would seem to contradict the idea that machines are learned idiots.
I think Ravi did mention that we will need to "train" robots to let them understand their direct and indirect environment so that, in a sense, they could act accordingly; that is indeed a learning phase, is it not, if I am correct?
Sidharta: Your description of the complexities of human intelligence is worthy, but perhaps unnecessary in this context. My point about the emotional or moral filter is that machines can and will be programmed with responses that also mimic our emotional subtexts. Human emotion is a psychophysiological experience or response with biochemical and environmental influences. It manifests itself in expressive behaviours, based on an empathetic desensitization and qualified reflection on past experience (memory). But given the almost limitless storage and recall capacity of current electronic memory systems, there is no reason why a sufficiently large number of socially acceptable response scenarios could not be programmed into a 'look-up' table, enough to satisfy a human correspondent that emotion is indeed being applied as a component of the machine's response.
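The 'look-up table' idea above can be sketched very simply. This is a toy illustration, not a real affective-computing system: the states, contexts and canned phrases are all invented, and the point is only that a table of pre-programmed scenarios plus a safe fallback already yields behaviour that reads as emotionally aware.

```python
# Map a detected (emotion, context) pair to a pre-programmed,
# socially acceptable response. All entries are illustrative.
RESPONSES = {
    ("frustrated", "task_failed"): "I understand this is frustrating. Let me try another way.",
    ("happy", "task_done"): "Glad that worked! Anything else?",
    ("neutral", "task_done"): "Task complete.",
}

def respond(emotion: str, context: str) -> str:
    # Fall back to a neutral, safe default when no scenario matches.
    return RESPONSES.get((emotion, context), "Could you tell me more?")

print(respond("frustrated", "task_failed"))
print(respond("bored", "task_done"))  # unprogrammed state -> safe default
```

The design question Bruce raises is then one of coverage: how many scenarios must the table hold, and how graceful is the fallback, before the mimicry convinces a human correspondent.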
With regard to 'reading minds' accurately without a learning phase, I was referring to the probability that the response characteristics (observed patterns of activity in the regions of the brain responsible for 'switching on' physical actions) could eventually be sufficiently well understood, and accurately enough interpreted, to allow a programmed response without recourse to a cause-and-effect learning stage.
As Mr. Reddy says, instead of a car remote we are using brain waves, and wrong signals will be filtered out. Understanding the mind's EEG wave patterns via sensors may take time.
Dear Bruce:
The journey in the quest for intelligence is long and endless, whether artificial or natural. So we should include as many such "unnecessary" contexts or subtexts as we can, even though they may appear weird. The new wave of theories defining artificial minds may hence include whatever might make some sense, however little.
Now, what is the real purpose behind recreating the human mind in an artificial sense? To solve problems faster, to predict uncertainty more smartly, and to make our lives more comfortable, I suppose; not just to win marathons of an intelligence race by designing such artificial minds for recreational purposes. We cannot afford such grand expeditions when our economies are out on the streets begging for alms.
You have mentioned the emotional subtexts that could be mind-read, and again that human emotion is a psychophysiological experience, or rather has a biochemical basis; this, however, does not accurately correspond to the real mechanisms by which machines work or respond. Because our environment is unstable, even given unlimited storage capacity and the ever-increasing power of information processing, it may be enough to read our minds by recognizing some definite patterns, but not enough to read and decipher our inner feelings, which are, after all, neural.
However smart machines may become someday, there will still remain some physical limits to their cognition, just as we have our mental limits of cognition. Our operative mind is a centre of creative energy, and it may take another few decades or more for machines just to match the inherent creative energy of a single human being. We have a special capacity to perceive not only through our five senses; very often we also use our sixth sense for "intuitions", which we may call natural randomness in the psychological environment.
Things that we are able to see, hear and feel, we can perceive, and perhaps something beyond that. Even the best of programmers would find it really difficult to emulate such perceptual cognition through hard-coded programming. Neither can we provide complete information in our code, nor can machines derive the same. The behaviour of a programme will decide the behaviour of a machine, just as our genes often dictate our inherited habits and behaviour. Machines cannot go beyond what their programmes dictate.
We are out to reverse-engineer the natural process of intelligent evolution and mental development, and what you have mentioned is just an example of rudimentary recognition of brain activity patterns associated with specific motor controls, not such emotional responses to stimuli, which are sensory, even though I agree with you that future machines will be designed with perception, perceiving through visual, audio and biosensor devices. Even supposing that machines could be programmed using genetic codes or rhythms, their "expressive" behaviour would still have to be programmed by us, not by themselves (think about the consequences of machines programming or writing code for other machines)! And that would be real intelligence, since they would need the capacity to think beyond the context or subtext.
We still do not have the clearest idea of how our own brain works, so how can we design such machines without complete knowledge of the workings of our mind in action and thought? Yes, I will definitely mention "thought", since we can think and interpret those thoughts in many shapes or forms that machines will find difficult to emulate, unless we develop some biological co-processors to match them. Hence, I think those suppositions are not entirely out of context; since we need to develop machines with an open mind, they would need to think and relate things among various contextual as well as non-contextual facts with an open mind.
And if you are correct, I would like to congratulate you, since that would mean we are heading for such "artificial think tanks"!
Thanks.
We train the robot's computer to understand by first asking the user to think about turning the car left; the brain's EEG waves are recorded in the robot computer's database. After several samplings, and after fuzzy logic has weighed all the other possibilities, when we think about turning the car left the brain waves are compared with the stored database and, just like a remote control, the turn-left mechanism of the robot-controlled car is activated.
Yes Ravi, fuzzy logic is a device that fills the gap between "intelligence" and "logic". A system that always goes by logic may not be that intelligent, as going by pure logic may end in dumbness. Any AI system's goal is to act intelligently yet in a logical fashion, which means that the system devises its own intrinsic paradigms within the context of logic, with just enough space to act as and when the situation demands, that is, more intelligently. Fuzzy logic deals with the problem of crisp set membership. The two values of the binary system, 1 and 0, make it difficult to account for degrees between two variables measured by Boolean parameters. In order to accommodate a more encompassing logic, Dr. Lotfi Zadeh came up with the idea of "fuzzy logic", a system intelligent enough to account for such degrees between the two values. So, it is a more encompassing type of reasoning that arose out of the limitations of Boolean logic.
Since Boolean logic is an idea of extremes that can account for only two values, 0 and 1, where you can set only 'yes' and 'no' as commands, fuzzy logic incorporates degrees of membership, grounded in real perception of the environment, anywhere between 0 and 1.
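The crisp-vs-fuzzy contrast can be shown in a few lines. This is a minimal sketch; the variable ("fast" speed) and the threshold values are arbitrary illustrative choices, and a real fuzzy controller would combine many such membership functions with inference rules.

```python
def crisp_fast(speed_kmh: float) -> int:
    """Boolean view: a speed either is 'fast' (1) or is not (0)."""
    return 1 if speed_kmh >= 100 else 0

def fuzzy_fast(speed_kmh: float) -> float:
    """Fuzzy view: degree of 'fast', ramping linearly from 60 to 120 km/h."""
    if speed_kmh <= 60:
        return 0.0
    if speed_kmh >= 120:
        return 1.0
    return (speed_kmh - 60) / 60

for v in (50, 90, 130):
    print(v, "crisp:", crisp_fast(v), "fuzzy:", round(fuzzy_fast(v), 2))
```

At 90 km/h the crisp rule says flatly "not fast" (0), while the fuzzy membership says "fast to degree 0.5", which is exactly the in-between reasoning Boolean parameters cannot express.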
Intelligence and logic are both important for reasoning. Systems need to reason using some underlying logic and adapt through those parameters we call intelligence, an attribute of reasoning using variations of logical phenomena; if such intelligence is not incorporated, a machine might simply go by logic without intelligence and prove catastrophic. Yet a system cannot reason by intelligence alone without some underlying "logic"; this is indeed the art and science of applying intelligence to systems to generate 'smart' behaviour.
Fuzzy logic is used in some modern intelligent machines, such as cars fitted with automatic transmission gearboxes, ensuring the car changes gear smartly without being limited by the constraints of Boolean parameters. Many electronic gadgets and home ambience systems are nowadays equipped with fuzzy logic as well. So, while reading minds with emotional filters, applications of fuzzy logic may protect the system from wrong decisions, as Bruce mentioned previously.
As John McCarthy has said in one of his papers, mathematical logic is not a single language; there are many types of logic, each with unique behaviour of its own, and so I think fuzzy logic is one such branch of mathematical logic.
Sidhartha, you seem to be paying a lot of attention to "techniques" rather than "process". That fuzzy logic and AI systems have to be used is a given! But that is just one block in a very large block diagram. I absolutely commend you on your competence with AI and fuzzy logic, but I have to tell you that it's just one part of the problem.
I'm hoping we can end the brainstorming here and stop brewing over the same issues again and again. Unless somebody has something NEW to say, it's fair that we mark the thread Solved until proven otherwise.
To discuss individual issues further, I think we should start a new topic, for the sake of not spoiling a very nice question.
Bruce: Having mentioned all the above little inferences about the use of logic and its variations, there is indeed a point on which you are definitely correct, within a very contextual paradigm: sampling techniques may indeed prove very useful in defining machine emotions. Yes, it is true that I was out of context, but not totally outside the subjective analysis, since your point makes real sense regarding:
- reflection on past experience, where response scenarios could be programmed and recalled via a "look-up table": a viable alternative where behavioural responses corresponding to human ones can be programmed, as many as possible, with their variations sampled via emotion filters to record and match signal patterns specific to such emotions.
Yet one thing is really problematic here and may require further thought. What if such emotional patterns overlap? Or, while driving such autobots, if our thought processes are unstable or fluctuate randomly, that could be really confusing for the program, I suppose. This would require continual feedback manoeuvres fed directly into the computer to keep the car on track and not behaving otherwise. The question then is how good such a sampling filter is. The filtering component, in that sense, needs to be very adaptive, I suppose.
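One simple, classical way to keep a fluctuating 'thought confidence' signal from toggling a command on and off is exponential smoothing plus hysteresis (separate engage and release thresholds). This is a sketch under assumed constants, not a real BCI filter design; the smoothing factor and thresholds are illustrative.

```python
def stabilize(confidences, alpha=0.3, on=0.7, off=0.4):
    """Yield the command state (True/False) for each raw confidence sample.

    The raw samples are smoothed exponentially; the command engages only
    when the smoothed value rises above `on`, and releases only when it
    falls below `off`, so brief spikes or dips cannot flip the state.
    """
    smoothed, active = 0.0, False
    for c in confidences:
        smoothed = alpha * c + (1 - alpha) * smoothed
        if not active and smoothed >= on:
            active = True           # engage only on sustained evidence
        elif active and smoothed <= off:
            active = False          # release only when evidence clearly fades
        yield active

# A noisy burst of 'turn left' confidence: a single spike does not engage,
# and a single dip while engaged does not release.
raw = [0.9, 0.1, 0.9, 0.9, 0.9, 0.9, 0.2, 0.1, 0.1, 0.1]
print(list(stabilize(raw)))
```

The gap between the `on` and `off` thresholds is what absorbs the random fluctuation the post above worries about: the wider the gap, the more stable (but also the less responsive) the command becomes.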
Any compelling evidence on this topic would be really helpful.
Dear Nitish: We are very much on topic, and there are some real discussions going on based on the question above, how to create systems or robots that can read our minds; indeed, many collateral thoughts and compelling, contrasting ideas have been exchanged. Well, your disposition on the above may sound a little fuzzy, but we are all learning from each other, and this is a wonderful opportunity to correct our doubts and relate the subject to other interdisciplinary areas of thought. This is a topic thread, so one is free to go as far as possible without limiting our understanding of the subject. It is not a matter of competence or failure; rather, we may expand our horizons a bit further to include in this beautiful topic the beauty of both technique and process, which are both important and interdependent.
Mind control has its difficulties: wrong mind commands. But if we turn right by hand while our mind orders left, erring is possible there too; we are not necessarily hitting a truck head-on. In time, correct mind-coordination commands will be achieved. Fuzzy logic is more or less correct, and the tolerable degree of fuzziness can be set manually depending on the individual.
As I understand it, the main problem, now that we have self-organizing maps, is to determine what data is flowing through an area of the mind, and to monitor it. It was much easier to read the muscle memory of a monkey than it would be to read the mind of a physicist and steal his theories before he can write them down.
With MEG, a SQUID-based device that can read minute magnetic moments, it should be possible to monitor electrical currents within the brain, but their sheer wealth is a barrier to computing what they mean. With vEEG the surface currents become available in a manner much more conducive to placement on the head than the old EEG, but there is still the depth-of-focus problem: the further a current is from the surface, the harder it is to monitor.
fMRI seems to have some promise, but its accuracy leaves a lot to be desired. Whether we can do real-time analysis of the rich information in an fMRI is another question as well.
I think, therefore, that what we are up against is the mechanical limitation of monitoring thought, and the failure of psychology to fully describe the brain. Once these barriers fall, it is entirely likely that mechanically assisted communication might allow us a range of options not currently within human reach. What we choose to do with it is currently suggested as possibilities only in science fiction and speculative works.