Information in the brain is read and written wirelessly and rapidly in neurons and synapses at the molecular level. Can one use artificial intelligence and machine learning to build such a machine?
"In order for the machine to produce consciousness, it would require the machine physically interact with matter in a similar way our brains do to allow latent consciousness to emerge"
I think this is key. In this view, our 'consciousness' arises from our internal processing coupled with our interaction with the physical world. I would be sceptical of any argument that consciousness (or intelligence) can arise only as a consequence of the manipulation of symbols - work on situated/embodied cognition seems relevant here (eg “Intelligence without reason,” R Brooks, Proceedings of 12th International Joint Conference on Artificial Intelligence) though I would not claim to be an expert.
Maybe you can look at the Turing test (well, actually more related to: can we build a machine that mimics a human?). Though some have claimed to have passed it, it seems they had not. A good blog post on it: http://www.scilogs.fr/complexites/non-le-test-de-turing-nest-pas-passe/ (sorry, it is in French...)
We do build machines, all kinds of machines, including some with artificial neural nets. Anyone here might be such a machine. All I see are pictures with names, so if after enough discussion I find that you seem conscious, then you are conscious for me, even if you are not a biological machine. But so far, to my knowledge, no one has built a machine that would pass such a test, and no one has even proposed one that would pass my realistic appraisal of the test. Can we build such a machine? Maybe yes, maybe not. That nature did it does not imply that it can be done artificially. The only conscious beings that exist on this planet are living animals living in interaction with their living environment. They are primarily conscious of their interactions in this environment. So a good place to start is autonomous robotics. Maybe at a very advanced stage in the development of this science we will discover higher levels of autonomy that will translate into intelligent/conscious behaviors.
Igor Aleksander, in his little book "How to Build a Mind" [Phoenix paperback, 2000], talks interestingly about conscious machines.
However, wearing my metaphysical idealism hat (following Berkeley, Foster), I find reason to question the existence of matter independent of mind. That means that the notion of a "conscious machine" gets to look the wrong way round -- it's more a matter of conscious minds inventing and working with the notion of a machine -- and machines don't exist as fundamentally as minds do.
''I find reason to question the existence of matter independent of mind.''
I find reason to question the existence of mind independent of a body.
''- it's more a matter of conscious minds inventing and working with the notion of a machine''
It's more a matter of bodies with a nervous system linking them to other bodies and operating within a natural process where successful operations are reproduced further than less successful operations.
''and machines don't exist as fundamentally as minds do.''
There is no separation between a body and a mind; the nervous system is part of the body, and it is related to the evolution of the interactions of these bodies.
This is a very fundamental question on existence. According to age-old yoga philosophy, built on the subjective experiences of numerous yogis over the years, Consciousness is the ultimate reality of existence. If someone is keen, I would suggest reading authentic books on Yoga by Swami Vivekananda, written in modern times for Westerners. Vivekananda wrote from his subjective experience. This is what I call "Subjective Science", in contrast to material (objective) science.
Please read my article on RG, "Science and Spirituality are complementary to each other...", where I discussed this aspect in detail, citing many examples.
I think the first question is: what does 'conscious' mean, and what are its essential characteristics? After that, you can try to implement it. Up to now, there are several theories addressing the definition of consciousness, its ontogeny, and its phylogeny.
And by the way, replication of molecular structure, even at the neuronal level, does not guarantee intelligence.
The interesting answer from Tushar Kanti Ray suggests that for BOTH "age-old yoga philosophy" AND Western metaphysical idealism (Berkeley/Foster), "Consciousness is the Ultimate Reality of Existence". Are these two systems of thought essentially the same, or do they differ in important ways, and if so, in what ways?
[I suggest we ignore the Judeo-Christian agenda of the Western tradition which is inessential to it]
I mean to engineer the machine, to implement it. One can build an airplane, a computer... Sometimes the theoretical approaches followed afterwards.
Dean Horak provided an excellent suggestion:
"In order for the machine to produce consciousness, it would require the machine physically interact with matter in a similar way our brains do to allow latent consciousness to emerge"
Yes, there are two points of view (and more!) which can be opposed point by point. But what arguments do you see that support one rather than the other?
Perhaps. A machine can process pieces of information and create relations between concepts: each concept is related to other concepts, and all the information about a concept can be extended in depth. For example, if an orange is a fruit and orange is a colour, the information piece 'orange' can be extended to: where is the orange now? Where do oranges grow? Does the colour of the orange change with time? Is it orange inside too? When a brain is born, the input of information grows constantly, second by second; if we implant hierarchical rules of information processing, we will memorize associations, a first step towards A.I.
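Here is a minimal sketch of such a concept network in Python, purely illustrative: the ConceptNet class, the relation names, and the example facts are my own assumptions, not anything defined in this discussion.

```python
# Minimal sketch of a concept network: concepts linked by named relations,
# and a query that "extends" a concept into everything associated with it.
from collections import defaultdict

class ConceptNet:
    def __init__(self):
        # relations[concept][relation] -> set of related concepts
        self.relations = defaultdict(lambda: defaultdict(set))

    def add(self, concept, relation, other):
        self.relations[concept][relation].add(other)

    def extend(self, concept):
        """Return every relation and associated concept known for `concept`."""
        return {rel: sorted(objs) for rel, objs in self.relations[concept].items()}

net = ConceptNet()
net.add("orange", "is_a", "fruit")         # an orange is a fruit
net.add("orange", "is_a", "colour")        # orange is also a colour
net.add("orange", "grows_on", "tree")      # where oranges grow
net.add("orange", "has_colour", "orange")  # its colour (which may change with time)

print(net.extend("orange"))
# {'is_a': ['colour', 'fruit'], 'grows_on': ['tree'], 'has_colour': ['orange']}
```

Associations added one by one like this, under hierarchical rules, would grow with the constant input of information the post describes.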
"In order for the machine to produce consciousness, it would require the machine physically interact with matter in a similar way our brains do to allow latent consciousness to emerge"
I think this is key. In this view, our 'consciousness' arises from our internal processing coupled with our interaction with the physical world. I would be sceptical of any argument that consciousness (or intelligence) can arise only as a consequence of the manipulation of symbols - work on situated/embodied cognition seems relevant here (eg “Intelligence without reason,” R Brooks, Proceedings of 12th International Joint Conference on Artificial Intelligence) though I would not claim to be an expert.
Consciousness is just an isomorphism between neurons (or any information system) and external physical objects. There is no difference between machine consciousness and human consciousness. See my model of consciousness, which is a common model for both humans and machines.
Is the thermostat on my heating system conscious that the heating should be on or off based on my temperature setting? As soon as the temperature gets below my setting, the thermostat is aware of it and closes a circuit that starts my furnace. According to your definition, the thermostat is conscious that the temperature is higher or lower than I want it to be, and it takes the appropriate action.
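To make the example concrete, the thermostat's entire 'awareness' reduces to a feedback rule like the one sketched below; the setpoint, the hysteresis value, and the function name are illustrative assumptions, not a real device's API.

```python
# A thermostat reduced to its control rule: purely reactive feedback,
# with no inner model or subjective awareness (illustrative values only).

SETPOINT_C = 20.0    # the temperature I want
HYSTERESIS_C = 0.5   # dead band, to avoid rapid on/off cycling

def furnace_should_run(current_temp_c: float, furnace_on: bool) -> bool:
    """Close the circuit when the room is too cold, open it when warm enough."""
    if current_temp_c < SETPOINT_C - HYSTERESIS_C:
        return True
    if current_temp_c > SETPOINT_C + HYSTERESIS_C:
        return False
    return furnace_on  # inside the dead band: keep the current state

# The room cools below the setpoint, so the furnace starts.
print(furnace_should_run(19.0, furnace_on=False))  # True
```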
''I find reason to question the existence of matter independent of mind.''
I also agree with that. The devil is in the details, though. If it is never independent, then we should perhaps be careful not to separate them in our discourse, and we should avoid saying that one is prior to the other.
I think that one good question, in order to address the problem of conscious machines, is: why are human beings conscious? This question makes sense because, for all living beings, everything must have some meaning, be useful for something (or it was, in their phylogeny).
Of course, it assumes that conscious beings appeared at some stage of life's development...
I was not criticizing your approach; I was simply making sure I understood what you were saying, using a simple automated system. I am not totally convinced, but I am not opposed either; I cannot find an argument to exclude this possibility. The doubt I have stems from the way biological organisms have evolved. They are autonomous systems, and their evolution corresponds to an increase of autonomy in gradually more complex interactions. But I am not convinced that their functionalities are not intrinsically informed by their biological makeup in ways that are not functionally specifiable. In other words, they are systems on the surface, which is why we can understand part of them, but they are not only that: their depth is not specifiable, yet this depth may still be of major importance for their action and evolution. If this vague hypothesis is valid, then consciousness cannot be functionally specified.
Wearing now the standard physicalist hat, where do zombies (in the technical philosophical sense) fit into this discussion? [See e.g. Daniel Dennett: “According to common agreement among philosophers, a zombie is or would be a human being who exhibits perfectly natural, alert, loquacious, vivacious behavior but is in fact not conscious at all, but rather some sort of automaton. The whole point of the philosopher’s notion of zombie is that you can’t tell a zombie from a normal person by examining external behavior”, Consciousness Explained, Penguin, 1991, p. 73.] Assuming the notion of a zombie is coherent (why would it not be?), it seems impossible to confirm that a machine is conscious even if you do actually get to make one! Aaargh!
We naturally think we are not zombies, and we can naturally tell whether a person is conscious, so the same natural faculty of judging from observation applies to animals and machines. It is the basis of Turing's test. We cannot doubt this, because it is what we naturally use, and why would we change it based on the substance of the actor? We do not need to understand how we judge; we just need to judge in the same way.
You write "we naturally can tell if a person is conscious" but the definition of a zombie I quoted has "you can’t tell a zombie from a normal person by examining external behavior". So presumably you regard zombies as an incoherent idea. But consciousness has two pretty separate meanings I think (see dictionary) -- conc1 is reacting to external stimuli, conc2 is having an inner awareness of a subjective world. You can have conc2 without conc1, and vice-versa.
I find it true but weird (i.e. unnatural) that I am conscious (conc2) -- how does my brain (theories of which I have read about, and which I have looked at in scans etc.) manage to produce the "blueness" of the sky (now and then) that I subjectively experience? Surely that is a bit mysterious. I believe that you are conc2 too, and maybe my dog, but it's not so easy to prove it.
So far, the only human zombies I have met were in a state of coma in a bed. But we are all partial zombies, because we are totally unaware of 99.999% of what we are doing, with our consciousness fully occupied by the 0.001%. It is surely an important bit, but for the most part we are zombies. Those who define a zombie as being just like any other person, such that we cannot tell the difference, assume that consciousness is thus not important. I disagree totally; these assumptions or definitions just put their conclusions in the premise.
The only world that exists for a living organism is necessarily subjective. But it is not in opposition to an objective world, since the whole subjective world is about an awareness needed to participate in the world. A subjective world without awareness is a contradiction. I cannot conceive of your two consciousnesses; there is a single experience. I do not find any problem with the blueness of the sky. We have eyes and cones; what are they for, if not to get some colour information about the world? Nothing mysterious there.
In his essay on ''le rire'' (laughter), Bergson associated the comic with surprising events in which our consciousness detects the purely mechanical aspect of our actions. A conscious machine would thus be one laughing at the mechanical aspects of its being.
Here is a solution to build a conscious machine that would be "laughing at the mechanical aspects of its being": http://dx.doi.org/10.13140/2.1.2286.5608
Why the first Sputnik? Why a man on the Moon? Why the Higgs particle? Why the internet? ...
Once Bill Gates said: “If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts”.
Building intelligent (conscious) systems that can talk, move, and solve problems like us will be the goal of many companies in the future, and http://dx.doi.org/10.13140/2.1.2286.5608 is just the first step.
That’s the machine that is worth far more than 10 Microsofts.
If someone says he can travel to a destination but does not know the exact path, we wish that person good luck and think that he at least has a chance of reaching his destination, or of getting closer to it, as long as his basic direction of travel corresponds to it. But the AI pioneers had no clue even about what the destination, intelligence, is; they did not care much about it, and naively thought that all paths would lead there very shortly: 25 years, according to Minsky. Half a century later, some are humble enough to realize that without a clue about what intrinsic biological intelligence is, the whole field is just pseudo-engineering riding on science-fiction fantasies and on the naive enthusiasm generated by the invention of computers and their rapid technological development, as their memory capacity and processing speed grew exponentially while their size and cost shrank. Although a traveller's chance of hitting the target increases with the speed of the vehicle, in a universe of possibilities much bigger than the number of atoms in the known universe, processing speed in random directions will get us nowhere interesting.
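To put rough numbers on that combinatorial point, here is a back-of-the-envelope sketch; the machine speed, the 300-bit design space, and the other figures are assumed order-of-magnitude estimates, purely for illustration.

```python
# Illustration: even a tiny design space dwarfs any brute-force search budget.
# All numbers are rough, assumed order-of-magnitude estimates.

ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80     # common rough estimate
OPS_PER_SECOND = 10**18                   # an exascale machine
SECONDS_SINCE_BIG_BANG = 4.4 * 10**17     # about 13.8 billion years

candidates = 2**300                       # distinct 300-bit candidate designs (~2 x 10^90)
total_ops = OPS_PER_SECOND * SECONDS_SINCE_BIG_BANG  # ~4.4 x 10^35 operations

print(f"candidate designs   : {candidates:.3e}")
print(f"atoms in universe   : {ATOMS_IN_OBSERVABLE_UNIVERSE:.3e}")
print(f"ops since Big Bang  : {total_ops:.3e}")
print(f"fraction explorable : {total_ops / candidates:.3e}")  # vanishingly small
```

Raw speed alone, pointed in random directions, barely scratches such a space; that is the sense in which the argument says it gets us nowhere interesting.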
I think this book, "Mind Over Machine", may be helpful in finding an answer to your question. I read it as part of the reading for a course about the social implications of technology (20+ years ago). The authors argue that machines can be programmed as rational thinkers, but that humans, in addition to rational thinking or knowledge, have an additional type of knowledge they call heuristic (or intuitive) knowledge, which machines cannot have. In other words, machines have a smaller subset of human knowledge.