What are the existing tests for machine consciousness that directly test qualia generated in a device? I find many proposals, but they only seem to test functional aspects of consciousness-related neural processing (e.g. binding, attentional mechanisms, broadcasting of information), not consciousness itself.
I have a proposal of my own and would like to know how it compares with other existing ideas.
https://archive.org/details/Redwood_Center_2014_04_30_Masataka_Watanabe
The basic idea is to connect the device to our brain and test whether qualia are generated in our "device visual field". The actual key to my proposal is how we connect the device and how we set the criteria for passing the test, since modern neuroprosthetics (e.g. the artificial retina) readily lead to sensory experience.
My short answer is to connect the device to one of our cortical hemispheres by mimicking inter-hemispheric connectivity and let the device take over the whole visual hemifield. We may test various theories of consciousness by implementing candidate neural mechanisms in it and testing whether subjective experience is evoked in the device's visual hemifield.
If we experience qualia in the "device visual hemifield" with the full artificial hemisphere, but not when the device is replaced with a look-up table that preserves all brain-device interaction, we have to say that something special, say consciousness, has emerged in the full device. We may conclude that the experienced qualia are due to some visual processing that was omitted in the look-up table. This is because, with regard to the biological hemisphere, the neural states would remain identical between the two experimental conditions.
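To make the control condition concrete, here is a minimal sketch (Python, purely illustrative; the class names and the toy "callosal pattern" interface are my own simplifications, not part of the actual proposal) contrasting the full device, which really processes its hemifield, with a look-up table that merely replays recorded brain-device exchanges. The point is that the biological hemisphere receives exactly the same signals in both conditions.

```python
from typing import Dict, Tuple

Pattern = Tuple[int, ...]  # toy stand-in for an inter-hemispheric activity pattern

class FullDevice:
    """Condition 1: an artificial hemisphere that actually processes its hemifield."""
    def respond(self, callosal_input: Pattern, hemifield_image) -> Pattern:
        # Placeholder for full visual processing of the device's hemifield.
        processed = tuple(sum(row) % 2 for row in hemifield_image)
        return tuple(a ^ b for a, b in zip(callosal_input, processed))

class LookUpTable:
    """Condition 2: replays previously recorded input/output pairs; no visual processing."""
    def __init__(self, recorded: Dict[Pattern, Pattern]):
        self.recorded = recorded

    def respond(self, callosal_input: Pattern, hemifield_image) -> Pattern:
        # Ignores the hemifield image entirely; the biological hemisphere
        # nevertheless receives the same signals as in condition 1.
        return self.recorded[callosal_input]
```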
The above argument stems from my view that, in the case of biological-to-biological interhemispheric interaction, two potentially independent streams of consciousness seated in the two cortical hemispheres are "interlinked" via "thin inter-hemispheric connectivity", without necessarily exchanging all the Shannon information sufficient to construct our bilateral visual percept.
Interhemispheric connectivity is "thin" in the sense that low- to mid-level visual areas are only connected at the vertical meridian. We need to go up to TE and TEO to have full hemifield connectivity. Then again, at TE and TEO, the visual representation is abstract, and most probably not rich enough to support our conscious vision, as in Jackendoff's "Intermediate Level Theory of Consciousness".
The first realistic step would be to test the idea with two biological hemispheres, where we may assume that both are "conscious". As in the last part of the linked video above, we may rewire inter-hemispheric connectivity in split-brain animals to fully monitor and manipulate inter-hemispheric neural interaction. Investigating the conditions under which the bilateral percept is regained (e.g. the capability of conducting bilateral matching tasks) would let us test existing ideas on conscious neural mechanisms.
The generation of qualia requires a special kind of mechanism that I consider to be a wavelike, non-digital one (see attached file). Connection to our brains is not sufficient to test the device, because the signal would either not be meaningful (not detected, or harmful) or be converted into our wavelike system and generate qualia even if the source is not a real quale. The test should be based on non-verbal reports by the system (e.g. emitting self-generated sounds, or drawing pictures like the elephant in the link below) and judgment by a group of human beings.
https://www.youtube.com/watch?v=yQv5mE42Yos
Thanks for the answer!
Before diving into your theory, what do you think about the philosophical zombie argument as in the third slide of my talk at UC Berkeley (linked video above)? The reason that I believe that we need to connect the device to ourselves and test it objectively is exactly because of its hypothetical existence.
Dear Masataka, my congratulations for a very sophisticated and testable theoretical hypothesis. I did not have time to watch the whole presentation, but can return to it if necessary. My comments:
a) I agree with the theoretical hypothesis of a whole-brain (inter-hemispheric) consciousness wave, and have argued that it is an ionic wave conducted by the astroglial network (Pereira Jr and Furlan, 2010, attached) that involves quantum entanglement (Pereira Jr., 2013, attached);
b) This phenomenon does not contradict the laws of physics; on the contrary, it is predicted by quantum theory;
c) The ionic wave that supports consciousness is not essentially inter-hemispheric; it also binds cortical and sub-cortical processes, as well as anterior and posterior cortex, etc.;
d) I wonder why a conscious machine would need to have two hemispheres like the animal brain;
e) I do not believe that qualia are just abstract or symbolic features encoded in higher sensory areas. I have argued that qualia are feelings instantiated in the ionic wave (they are not spatially located in the brain, but they are specified by the form of the wave). In this interpretation, "yellow" and "blue" are waveforms that appear in an integrated conscious episode composed of spatially distributed dendritic graded potentials integrated into a whole-brain ionic wave.
Did I misinterpret your proposal?
Best Regards,
Alfredo Pereira Jr.
P.S. I did not comment on philosophical zombies. My understanding is that the test of consciousness hypotheses is necessary because of the scientific method, not because of the possibility of zombies. This possibility is controversial.
I don't quite understand what you are testing, Masataka. We have known for decades that if you stick an electrode in a brain and give a voltage signal you can get qualia, and the qualia are entirely dependent on where the signal is arriving in the brain. They have nothing to do with the nature of the electrode - whether it is silver or platinum, or connected to some lab equipment or an electric toaster.
Qualia are determined entirely by where signals arrive in a brain - which brain cells they are inputs to and which synapses on those cells. They must occur at this site of arrival because there is no mechanism for transferring complex patterns across brains. And anyway, why would one want them to float around anywhere else?
The qualia we discuss have the privilege of being associated with inputs that are linked up to a monitoring system that is also linked to short term memory and speech. Inputs to cells in your cerebellum can be expected to be associated with qualia too, but since they are not linked up to the monitoring system that feeds short term memory and speech your cerebellar cells will never be able to discuss their qualia with mine. It seems reasonable to attribute something of the same general category as 'qualia' to all events, including inputs to semiconductors in computers, but it seems unlikely that these gates will have qualia patterns with large numbers of degrees of freedom that correlate with an external world in the way ours do.
I am currently working on a review of such tests (for machine consciousness), and it is true: in the literature you will not find any practical test for qualia (or P-consciousness) - just functional, behavioral tests, or tests that rely on heavy assumptions, generally the acceptance of a specific model of consciousness; and there are plenty of those, so who is to say which (if any) is correct?
The "other minds" problem is still whole :-)
Dear Alfredo,
I apologize for my delay in response. Let me look into your papers and reply. It sounds interesting!
About philosophical zombies, yes, their existence is debatable. So a better way to put my point is that any objective behavior can, in principle, be emulated by a program that has nothing to do with consciousness. What do you think?
Dear Jonathan,
Nice to see you again and thanks for the response!
"It seems reasonable to attribute something of the same general category as 'qualia' to all events, including inputs to semiconductors in computers, but it seems unlikely that these gates will have qualia patterns with large numbers of degrees of freedom that correlate with an external world in the way ours do."
Yes, it seems reasonable that qualia may occur in computers someday, but the question is how we can actually test for it. Any ideas?
You cannot test it, Masataka, because the golden rule of qualia is that they are always totally proximal. They are never available for confirmation other than where they occur. This is basically the 'other minds' problem to which the rational solutions are either solipsism or, being more magnanimous, a form of panexperientialism as a default concession to Ockham. Anything in between, like prefrontal cortex has qualia but not cerebellum or computers, is irrational and uncalled for. But the choice will only ever come from inference from circumstantial evidence.
To my mind the interesting question is why, given that we have qualia here and now, they are of the pattern they are in the dynamic context we find them in. If we can come to understand the rules of correspondence between qualia and dynamics within us we may then be much better placed to speculate about what sort of qualia there would be in an artificial nervous system if we go for panexperientialism rather than solipsism. So it is not a question of proving that there are qualia in some context - that is dead in the water. It is a question of trying to work out what sort of qualia there might be in a machine if we are generous enough to speculate that qualia are everywhere.
The first step in this process for me is to realise that qualia do not belong to 'persons' and there is no reason to think that there is only one set at a time in a brain. If we think there might be qualia in machines then we should accept that there may be all sorts of different sorts of qualia all over our brains all at once. Most of these qualia are not connected up to monitoring systems that allow them to chat to other qualia in other brains (or similar copies in the same brain if talking to 'oneself'). So they are as ineffable as those in the computer. But if we are being magnanimous we have to be consistent.
Dear Masataka, you asked my opinion about the statement: "the better way to put my point is that any objective behavior can be, in principle, emulated by a program that has nothing to do with consciousness".
I disagree with the "any". Some (or many) behaviors may be, but not all. There is a class of behaviors that occur IFF the system is conscious. However, this class cannot be defined 'a priori'. I propose that an effective Turing test should be organized with a jury (composed of human beings) to decide if the machine is conscious or not.
For instance, I have proposed to Pentti Haikonen (private communication) to insert an "affective module" in his (assumed to be) sentient/conscious robot, to express its feelings to us. The best language for this affective module would be music. I understand that the structure of music (scale, harmony, melody, rhythm) corresponds to our brain's metamodal language of feelings (the AM waveforms that I discuss in my papers). If the robot plays music for every feeling it has, an external observer can judge whether these feelings look genuine.
Alfredo, I find the idea of music quite interesting! Could you please expand on that? Do you mean that the robot 'registers' the waves of music played and then should try to play back music that somehow corresponds to 'its feelings'? If so, then couldn't we potentially program the robot so that it picks up the mathematical patterns of any played music and links them to other similar mathematical patterns through 'wavelength matching'? The robot would then play the other, similar mathematical pattern, and an external observer might judge it as if the robot were playing music that corresponds to its 'feelings'. Does this make sense?
How about if we test consciousness through poetry AND music? If the robot is able to understand the metaphors or the semantics of the words according to the context, and is then able to play music that is externally judged as appropriately depicting the emotions that the poem evokes, then the robot has consciousness!
@Alfredo Pereira Junior: You might be interested to know that Colin Hales proposed scientific behavior as belonging to this class. He wrote a whole book about that, entitled "The Revolutions of Scientific Structure", largely inspired by Kuhn. In the same vein, Aaron Sloman proposed in a paper in 2010 that consciousness would eventually lead to the emergence of philosophical questions. Koch and Tononi developed their own variant of the Turing Test, based on anomaly detection in pictures, it was published in Scientific American. And the list goes on and on with different criteria (creativity, adventurousness,...), I'd be glad to share some references if you want.
Anyway, to get back to the discussion here, these tests, as you said, will always require human judgment. I agree with you when you say that emotions and feelings should be considered more carefully; I believe they might play a key role in sentience. However, I have to agree with Masataka too that even emotions can, theoretically, be dissected functionally, and thus simulated by appropriately designed software. And I think that is why he focuses his research on the detection of qualia.
So far, both views are valid because we don't have any proof in one sense or the other. It is mainly a matter of personal belief.
That's a nice idea...
But, as you said, this is also possible through artificial implants that are not conscious.
I think one ultimate solution is this:
proposing a theory that "measures consciousness itself": a theory that measures consciousness regardless of its substrate, whether the substrate is a machine or a biological brain or....
I think attacking the heart of the problem is the key.
Masataka,
Turing had a much simpler idea. We constantly test each other's intelligence by asking each other questions, and when someone (behind a curtain or behind an internet icon) gives an intelligent answer we say: that person is intelligent. Turing simply said we should do the same thing with whatever machine claims to be intelligent. This simple idea can be extended to test consciousness. Before we test other people's intelligence we constantly test whether they are conscious, and the more alive, responsive and intelligent a person is, the more conscious we say this person is. It is what we do when discussing on RG. So let's do the same thing with a machine that someone claims is conscious. If we are convinced it is, then it is, as far as we can judge. So let's apply the same test to machines and to humans. We do not even need to invent the test, since it is built into our nervous system.
Dear Livia, you wrote: "Could you please expand on that? Do you mean that the robot 'registers' the waves of music played and then should try to play back a music that somehow corresponds to 'its feelings'?"
Alfredo: Not exactly. The robot's feeling is a wave that is structured like music (amplitude-modulated waveforms). We insert in the robot a wavelike mechanism (e.g. a set of violin strings and a mechanism to play them, as in the link below) that allows it to have these feelings. Then we capture the signal and send it:
a) Back to the robot's cognitive modules to let the feeling modulate cognition and behavior (the Toyota robot does not have this feedback); and
b) To an audio amplifier and loudspeaker, to let the human observer "hear" the robot's feelings (the Toyota robot plays a real violin that amplifies the sound of the strings and allows us to hear, but it is probably not conscious because these vibrations do not modulate its cognitive activities).
Livia: "Couldn't we potentially program the robot so that it picks up the mathematical patterns of any played music and links them to other similar mathematical patterns through 'wavelength matching'? Thus, the robot plays the other similar mathematical pattern and may externally judge it as if the robot is playing music that corresponds to its 'feelings'? Does this make sense....?"
Alfredo: Yes, it makes sense, but there is one ingredient missing in your idea: the valence of the feeling. For us, each feeling has a valence; it is good or bad (in varying degrees). This association of feeling with appraisal is the result of our evolutionary process. For the robot, we have two options:
a) To pre-program the robot, by associating one class of feeling patterns with "good" and another with "bad", the same way we do (for instance, pay attention to the soundtrack of movies - the director uses specific musical patterns to convey specific feelings; all great composers are experts in this domain of associating musical patterns with feelings). Of course, "good" reinforces the cognitive/behavioral patterns that elicit the feeling pattern, and "bad" inhibits them;
b) To let the robot learn from its experience, using a connectionist architecture with a basic learning rule - all that causes lesions to the structure of the robot is associated with "bad" and all that increases its power/survival is associated with "good".
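Just to illustrate option (b), here is a minimal sketch (Python; the class and the delta-rule update are my own toy assumptions, not Haikonen's architecture) of how the valence of a feeling pattern could be learned from damage/benefit outcomes:

```python
import numpy as np

class ValenceLearner:
    """Toy option (b): learn the valence of 'feeling' patterns from experience."""
    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros(n_features)   # learned valence weights
        self.lr = lr

    def valence(self, pattern: np.ndarray) -> float:
        # Returns a value between -1 ("bad") and +1 ("good").
        return float(np.tanh(self.w @ pattern))

    def learn(self, pattern: np.ndarray, outcome: float) -> None:
        # outcome: +1 if the episode increased the robot's "power/survival",
        #          -1 if it caused damage to the robot's structure.
        error = outcome - self.valence(pattern)
        self.w += self.lr * error * pattern
```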
Livia: "If so, then couldn't we potentially program the robot so that it picks up the mathematical patterns of any played music and links them to other similar mathematical patterns through 'wavelength matching'? Thus, the robot plays the other similar mathematical pattern and may externally judge it as if the robot is playing music that corresponds to its 'feelings'? Does this make sense....?"
Alfredo: Yes, it makes sense, provided that the robot really feels the patterns (i.e. it is affected by the valence of the pattern).
Livia: "How about if we test consciousness through poetry AND music? If the robot is able to understand the metaphors or the semantics of the words according to the context, and then is able to play music that is externally judged as appropriately depicting the emotions that the poem evokes, then the robot has a consciousness!"
Alfredo: Maybe in the future, because the semantics of human language is very complex. It would be interesting to encode meanings in the same kinds of wave patterns found in music. Then we would have a unified "language of feeling" (sorry, Fodor, for my misuse of your "language of thought" thesis...).
https://www.youtube.com/watch?v=EzjkBwZtxp4
Dear Louis, the Turing test tests intelligence, not consciousness! If consciousness is basically feeling, it cannot be tested by means of cognitive measurements.
Dear Abolfazl,
> But, as you said this is also possible through artificial implants that are not conscious.
As in the video, and briefly in the attached pdf, to overcome this problem I derive a way to connect the device to the brain. The basic idea is to let the device take care of a whole visual hemifield and connect it to one of our own cortical hemispheres.
If qualia are experienced in the "device visual hemifield" with a full visual device, but not when the device is replaced with a look-up table that reproduces the brain-device interaction, we have to say that something special, say consciousness, has emerged in the full device.
Alfredo,
If you read my post, I acknowledged that the "Turing test" tests intelligence. I only extend the concept to consciousness. When I exchange with the icon on RG with the name Alfredo Pereira Junior, that icon seems quite intelligent and conscious. I can even make the icon angry sometimes. So I am applying the test to this icon. Let's do the same with a machine, and if you tell me that you are one, then I do not have to revise my judgement.
Dear Jonathan,
"You cannot test it, Masataka, because the golden rule of qualia is that they are always totally proximal. They are never available for confirmation other than where they occur."
My position is that two "potentially independent" streams of visual consciousness (e.g. as in the split brain) that are located in the two cortical hemispheres are linked together with "thin connections" via the corpus callosum and the anterior commissure. This linkage leads to our seamless bilateral visual experience, without all the information necessary for our subjective percept converging onto a single physical point.
So in the case of bilateral vision, where would your single point, say "where they occur", be?
@ Masataka, your comment just gave me an idea: THE THIRD EYE, which rests between our two eyes! Our third eye is somehow connected to the other 'thin connections' in our brain (thus, it is also called 'the mind's eye'). Perhaps the third eye can be 'the single point' where our bilateral vision occurs, so that it is perceived by us as seamless? :)
Dear Masataka,
You have switched to a different point from the one addressed in the sentences of mine that you quote, but I can answer nevertheless.
We know that signals in the right and left hemispheres lead to axonal branches crossing to the other side in a pretty 'thick' connection called the corpus callosum, but that makes no difference to the fact that the signals on the right are on the right and those on the left are on the left. We have no reason to think that you can 'join up' experiences by sending a few secret signals sideways between them. Some structure on the right can be conscious of what is signalled on the right (including some derived from the other side) and some structure on the left can be conscious of what is signalled on the left, but no structure on either side can be conscious as a (direct) result of signals on the other side, and nothing receives both sets of signals.
The explanation in my model is much simpler. Signals dealing with right-field objects send branches from left cortex to right cortex through the corpus callosum, and vice versa for the left field. So there will be cells in anterior cortical regions, probably distributed over a wide area in both hemispheres, that receive signals encoding the objects in both hemifields. I think I explained a long time ago that we have no reason to think that there is a unique or single point where subjective percepts occur. I think the percepts occur in vast numbers across a wide area. But each instance of the percept must include all the relevant information - so each cell must receive inputs from about 1,000 cells, each encoding some aspect or element of the scene.
Dear Jonathan,
What I mean by "thin inter-hemispheric connections" is that the low- to mid-level visual areas are only connected at the vertical meridian. You need to go up to TE and TEO to have full hemifield connections. Then again, at TE and TEO, the visual representation is abstract, and most probably not rich enough to support our conscious vision.
My understanding is that, in your view, you do need at least one neuron that collects all the necessary information from the two visual hemifields to explain our bilateral visual percept. Is that still true?
Dear Abolfazl,
"proposing a theory that "measures consciousness itself": a theory that measures consciousness regardless of its substrate, whether the substrate is a machine or a biological brain or...."
My exact aim is to come up with the right theory, but my position is that we do need to test it experimentally. Do you think that we can pinpoint the correct theory without experimentally testing it?
Masataka,
there is an error in the zombie theory. There is no evidence that a zombie is possible.
Masataka,
if you are interested, read my similar question on RG:
https://www.researchgate.net/post/What_could_be_proof_of_consciousness?_tpcectx=profile_questions
Louis,
your suggestion is a nice indicator for everyday situations, but not a valid test in science. We can never be sure that an intelligent behaviour is induced by consciousness. In computers we find intelligently programmed programs ...
It may be that MRI or EEG is able to find neuronal activity in the brain, but we do not know the signature of consciousness.
We have to accept that consciousness is a subjective phenomenon and is not measurable. Maybe we will find a new paradigm in science.
Dear Wilfried,
"in the zombie theory is an error. There is no evidence that a zombie is possible.."
So do you mean that its existence is debatable, or are you suggesting that there is logical proof that it cannot exist? I am aware of controversies, but I don't know of any established argument that completely denies its existence.
Dear Masataka,
I now realise that you were referring just to the 'stitching' at the meridian in early areas. But do not change blindness and inattentional blindness (like the gorilla in the basketball game) indicate that we are not conscious of this early array of data?
Surely we are conscious of a very much abstracted picture of the world, containing objects and their hidden amodal aspects and shadows, movements and even causal powers. I think it is a false extrapolation from what we know about cameras that makes us think that we perceive some sort of pixellated array. Surely the whole point of the abstraction mechanism is to extract salient, relevant, abstract patterns from input so that we can compute useful behaviour in response to such abstract conceptions. What would be the use of being aware of the fragmented data elements that will give rise to the useful conceptions? Unless of course we wanted to detect errors in our abstractions under poor conditions like fog, for which we might have a mechanism favouring low-level abstractions getting through to awareness rather than high-level ones - but still abstractions, like 'a black shape'.
And as I have said many times, all the evidence seems to be that the richness of a percept, in terms of the number of bits of data needed, is easily within the scope of a neuron (100-10,000). These bits will have nothing to do with pictorial grain, except maybe in the error-detecting situation indicated above, and even then not raw grain. Pretty much everybody else I know in neuropsychology seems to agree on this.
So yes, at least one neuron needs to get the maybe 5,000 bits of information you need for a scene - and that fits the anatomy quite well. It even allows for tenfold redundancy if we think individual synaptic spines are too crowded together to have individual semantic significance. The more I keep saying this to people the more I wonder why anyone ever thought there was a problem.
With reference to zombies, I think there is a clear argument that denies their existence if you take Chalmers's original definition, although it is possible to reformulate a conceivable zombie if one really thinks it worth it.
Our definition of physical is something that gives rise to certain specific experiences if you put a human mind in an appropriate position at an appropriate time to register these. So a cup is physical because it makes people see a cup or feel a cup when they are in the right position. The Higgs boson is physical because people at CERN had experiences of certain collision images statistically more often than they should have if there were no Higgs bosons, etc. There is no other way to define physical.
A philosophical zombie is defined as physically identical to a human but with no experiences. Now, physically identical to a human must mean that if you put a human being in the right place they will get certain experiences caused by the presence of the zombie - they may hear it speak or see its convincingly human body language. But this must, unless we redefine physical, mean that if the zombie stands in front of a mirror we have a total set-up that is 'physically identical' to you or me in front of a mirror, which we have said entails getting an experience, yet there is no experience. So there is something wrong with a definition of physical that entails experience. But we have no other option that is not circular. So Chalmers's proposal is self-contradictory.
If we say that the laws of physics are inconsistent - that experience is not always entailed, we have a way out, but since experience is always proximal and can never be identified in the distal we have a non-scientific, i.e. untestable, and also unparsimonious, definition of physical. I am tempted to say why bother.
Dear Jonathan,
"So yes, at least one neuron needs to get the maybe 5,000 bits of information you need for a scene"
If you have 100x100 pixels which only take two values each (e.g. black and white), that is already 10,000 bits. I believe that we can do much better than that in terms of perceptual discrimination. Yes, it may be compressed, but the input to this particular neuron will change depending on the compression strategy (e.g. coordinates for black pixels, coordinates for white pixels, particular shapes, etc.).
Where does your bit estimate come from? Do you assume that your "single dendritic tree" can cope with different compression strategies in generating visual experience? If so, how would that work?
Wilfried,
There are clearly documented events where people, due to some momentary brain problem, acted almost normally but were totally absent, and so were momentary zombies. I am very often zombie-like when I walk: I am totally oblivious to the environment and think about some problem. I am in another place than the place where I am. The zombie-me is there, walking on auto-pilot on the sidewalk, and the conscious-me is somewhere else. And we do most of our actions in zombie mode. When I type this on the screen, I am totally unaware of the movement of my fingers, and if I paid attention I would lose confidence and impair my typing skill, which is mostly a zombie/automated activity only superficially controlled by my consciousness, which pays some attention to the words but is paying more attention to what I am trying to say, which belongs to a fuzzy world, and the words coming out more or less feel right about this fuzzy world.
''your suggestion is a nice indicator for everyday situations, but not a valid test in science.''
Why? I discuss with you, and all I have seen from you is an icon and some words in your posts. Based on that alone I know you a little bit, and I am convinced that you are conscious and intelligent. If you write to me that your icon is really a picture of you because you are a replicant, am I allowed to reverse my judgment? No. It would not be scientific to do so. We do not understand what it is to be intelligent, what life is, or what it is to be conscious, but we all know they exist, and we know it because it is built into us; we do not know why either, and if we were not able to distinguish what is alive from what is not, what is intelligent from what is not, and what is conscious from what is not, then we would not even try to understand them. The test is built in, and so we can use it, as we can use a voltmeter without understanding how it was built.
Dear Masataka,
We are simply not dealing with pixels. Marr saw that. Barlow saw that. Blakemore sees that. All the neuropsychologists I know see that. Pixels have nothing to do with it. Drawing programmes like Power Point do not use pixels for encoding an image. Only the screen has to turn it into pixels. But there is no need for that in the brain because the data are not going to be accessed optically.
Dear Jonathan,
Sorry for the ambiguity, but I am not talking about pixels as the foundation of visual percepts. I thought that was quite clear.
I am assuming a 2D array of rectangular cells (100x100) which functions as a coarse visual display. Let's say that the cells are bounded by horizontal and vertical lines. The 2D array of cells may present random dots, shapes, etc., and depending on what is presented, the optimal compression strategy would differ.
We will have a crisp visual experience of them, down to sub-cell resolution (which means single cells need much more than 2 bits, even if the display is only black and white), and may conduct discrimination tasks on them.
If you don't compress, you have much more than 10,000 bits. If you do compress, the code that arrives at your single dendritic tree will drastically differ depending on what is presented. Compression strategies in the brain would correspond to neural representations by neurons of various preferences (e.g. on-center, line edges, curvature, T-junctions, shape alphabets, faces, houses, etc.).
Given this situation, how could PSP states in a single dendritic tree lead to robust visual experiences of all the various stimuli presented on the coarse display?
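To put rough numbers on this, here is a purely illustrative sketch (Python; the coordinate-list code is just one toy compression scheme of my own choosing). The raw binary display takes 10,000 bits, and how well any particular compression scheme does depends entirely on what is shown:

```python
import math, random

W = H = 100                          # 100x100 binary display, 1 bit per cell
raw_bits = W * H                     # 10,000 bits with no compression

def coord_list_bits(image):
    """Toy compression: list the coordinates of the 'black' cells."""
    n_black = sum(cell for row in image for cell in row)
    bits_per_coord = math.ceil(math.log2(W)) + math.ceil(math.log2(H))  # 14 bits
    return n_black * bits_per_coord

random_dots = [[random.randint(0, 1) for _ in range(W)] for _ in range(H)]
single_bar  = [[1 if x < 5 else 0 for x in range(W)] for _ in range(H)]

print(raw_bits)                       # 10000
print(coord_list_bits(random_dots))   # roughly 70,000: worse than raw for random dots
print(coord_list_bits(single_bar))    # 7,000: much better for a structured stimulus
```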
Dear Masataka and Jonathan,
Of course single neurons are not sufficient to instantiate visual or any other experience. It takes billions of neurons and astrocytes, as well as neuro-astroglial interaction and cross-modulation, to have both definition (crispness) and integration.
Louis,
I agree with most of your answer. Our mind is able to handle communication with other conscious agents; most of them are persons, but animals are no problem either.
Your contribution is very pragmatic, but I fear this argument does not meet scientific standards, which require not a subjective but an objective indicator. There is no idea on the planet of what kind of indicator would be valid.
Masataka,
in the popular media we find a wave of zombie themes. Zombies are very common.
I think it is very tricky to think about zombies, but there is a logical problem. A zombie acts like a human, but in its mind there are no qualia and no consciousness.
1. There is no evidence that zombies exist.
2. If a non-conscious agent is able to act like a conscious one, nature need not develop consciousness along the evolutionary path, because this feature would have an unjustifiable energy cost.
3. If ambivalent decisions are to be made, a zombie is not able to decide like a human. Its decisions would have to be random, which makes them arbitrary. Here we find a difference from human reactions, which are not arbitrary (a logical contradiction to the existence of zombies).
4. The assumption of zombies is not effective. If one takes seriously the argument that zombies, which do not need consciousness, can exist, then we do not need consciousness at all. But we obviously do have consciousness; therefore there can be no zombies.
But Masataka, you are still talking of pixels - or at least grids of rectangles, which is the same thing. I think most people in neuropsychology accept that conscious percepts do not arise from sets of data like this. If I see five red roses, I assume that somewhere in my brain is a juncture that is receiving signals that can be integrated rather like words (to quote Marr and Barlow). There will be a signal for 'rose-shape', a signal for a certain red to apply to rose shapes, and a signal to indicate that rose shapes are in quintuplicate. If you ask how 'sharp' this image is, we can add a signal for 'very crisp' or one for 'rather blurry'. Clearly we are going to need to do better than that, but 100 bits will give us a million shape options and 10 bits will give us a thousand colour options, and if we have a cell that can take in 5,000 bits we have quite a complex scene, which could contain maybe up to five object token signals to which properties could be ascribed.
That would be roughly how Power Point would do it - you click on shapes and fill and copy and paste, and all that needs to be stored are the codes for these manoeuvres.
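For what it is worth, here is a toy sketch (Python; the vocabulary sizes and the scene description are my own illustrative choices, not a claim about the actual neural code) of such a 'word'-style description of "five crisp red roses" and the few bits it needs compared with a raw grid:

```python
import math

# Toy "word cell" scene description in the Power Point spirit: a handful of
# symbolic tokens drawn from finite vocabularies, instead of a pixel grid.
vocab_sizes = {
    "shape":     2**20,  # ~a million shape options     -> 20 bits
    "colour":    2**10,  # ~a thousand colour options   -> 10 bits
    "count":     2**4,   # up to 16                     ->  4 bits
    "crispness": 2**2,   # very crisp ... very blurry   ->  2 bits
}

scene = [("shape", "rose"), ("colour", "red"), ("count", 5), ("crispness", "crisp")]

bits = sum(math.log2(vocab_sizes[attr]) for attr, _ in scene)
print(bits)   # 36 bits for "five crisp red roses", versus 10,000 for the raw grid
```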
And Alfredo, your statement is, as you well realise, totally and rather uncharacteristically unsubstantiated by any argument. As we have discussed over many years, integration, if it requires anything, requires some sort of dynamic compresence in an event, and in neurology post-synaptic integration (the key word) is the only form we have. And crispness does not need lots of data. One click on a square in Power Point gives you a crisp square. It is fuzziness that requires more data - to have intermediates. When I walk around without glasses everything is crisp. If I then put glasses on, everything is crisper, and only at that point can I introspect fuzziness if I take the glasses off again. Seeing fuzziness requires more careful attention to detail. Unlike a screen, our visual awareness has no constraining grid. To get crispness all you need is a signal for 'edge' from a Hubel and Wiesel cell. There is no piece of paper on which to draw that line thick or thin. It is just an edge idea.
I realise that what I am saying may sound strange, but I can guarantee that everybody present at a seminar of the Institute of Philosophy CenSes (neuropsychology/philosophy interdisciplinary) group would agree with what I am saying about how percepts are likely to work. They may not think in terms of single cells, but the other stuff is standard.
Wilfried,
Why do you think Turing came up with his test? Because the only way to be objective about this is to be consistent. The certainty of our own consciousness is based on its own existence and its necessity for the assertion of everything else. It is objective in the sense that it is an intersubjective necessity. A communication is objective if its interpretation is not ambiguous, so that everybody understands it in the same way. A message such as ''I am conscious'', when explained sufficiently by a context, is unambiguously understood and in that sense is objective, but it is not an explanation in terms of lower-level explanations. The Turing test, which is simply what we do all the time here on RG when we consider each other conscious and intelligent based on our communications, is objective in the above sense. Now suppose that somebody comes up with a consciousness apparatus: a really objective test in its operation, one that we understand in its operation, and one that we can apply here on this thread in order to determine who is conscious and who is not from their posts. It would mean that there is an inverse function from a set of words to a verdict about the consciousness of the entity uttering these words. I do not find this likely. When we judge each other, our judgement is based on the same type of body and the same type of enculturation, and through empathy we are able to indwell in the source of these words and meet, or not, something of the same nature as the indwelling process active in us.
Dear Jonathan,
Sorry for the lack of argumentation in my post. I have argued in the past that integration occurs not only at dendrites, but also in the astroglial network. I thought I could take for granted that you had this argument in mind.
The scientific community probably agrees with the basics of Tononi's information integration theory, implying that cooperation of several brain regions containing billions of cells is necessary to have a conscious experience. You cannot compare this "integrated information" with the storage of information in a binary language in a digital computer, because the computer is not conscious.
There is something in conscious experience that requires a continuous wave - the feeling. It is like Achilles having to cross the continuum to reach the tortoise - although Zeno was not right in claiming that Achilles would never reach the tortoise, his argument shows that an experience in the continuum requires a lot of stuff to happen.
Just an analogy, because I do not have a quantitative argument.
Wilfried,
For Dilthey, Hodges argues, "it is a fundamental characteristic of mental life that in one way or another it expresses or "objectifies" itself," and "expression is the medium through which we know other minds." He offers the following instance:
''. . . I see a human figure in a downcast attitude, the face marked with tears; these are the expressions of grief, and I cannot normally perceive them without feeling in myself a reverberation of the grief which they express. Though native to another mind than mine, and forming part of a mental history which is not mine, it none the less comes alive in me, or sets up an image or reproduction of itself (Nachbild) in my consciousness. Upon this foundation all my understanding of the other person is built.
.....This power of expressions to evoke what they express is the basis of all communication and all sharing of experience between human beings. It is not an inferential process. When I see the stricken figure I do not begin by recognizing the attitude as the attitude typical of grief, and conclude from this that the person before me is experiencing grief. The mere sight of the expression awakens in me an immediate response, not intellectual, but emotional; feeling arouses feeling with no other intermediary than the expression itself. Dilthey remarks that what happens in me on such an occasion is the same as what happens in the other person whom I understand, only as it were in reverse. In him a lived experience has externalized itself in an expression; in me, a perceived expression has internalized itself in the shape of a Nachbild of the experience expressed. Guided by the other person's expression, I live over again (nacherlebe) his experience in my own consciousness, and this is the essence of understanding. "To reproduce is to re-live" (Nachbilden ist eben ein Nacherleben).
.....When I thus re-live someone's experience, the Nachbild of his experience in my mind both is and is not a part of my own mental history. It is, in the sense that it is I who am conscious of it; it belongs to my unity of apperception. It is not, in the sense that it is not my personal response to circumstances affecting me personally, but a reflection in me of someone else's response to circumstances affecting him. It is, so to say, distanced from the stream of my own life, eingeklammert or 'bracketed off', and ascribed by me to the other person. This again is not an act of deliberate judgment. I do not begin by observing the presence of a feeling in my mind and then judge that it is a reflection of something in his, but it is immediately projected and perceived by me as his. This projection Dilthey calls a "transposition of myself" (Uebertragung, Transposition, Sichhineinversetzen). It means perceiving the other person as possessed of an inner life essentially like my own, and so "rediscovering myself in the Thou" (das Verstehen ist ein Wiederfinden des Ich im Du). (Hodges, 1949, 14-15)
Source: http://c-cs.us/configuring/Dilthey.html
Dear Alfredo,
I realise that most people think consciousness arises from millions of cells at once, but I doubt that has much to do with Tononi. Most neuroscientists I know are not sure they can make much of Tononi's theory. On the other hand most neuroscientists will also admit that they have no firm opinion on which or how many cells are involved. My main point in the last post was that they ARE pretty sure that percepts do not arise from data in grids of the sort Masataka was suggesting.
I have recently been reading two books. The first is Buzsáki's Rhythms of the Brain, which talks a lot about 'waves'. The second is Descartes's Passions of the Soul. To my mind the second is on much firmer ground when it comes to basic dynamics. I would recommend the Passions to everyone interested in neuroscience. Descartes is remarkably close to the truth in a surprising number of areas. As long as you interpret 'subtil fluids' as neurotransmitters and cations and 'pores' as gated ion channels you get a pretty good story. Descartes even has a version of Hebbian reinforcement by facilitation of pores. Moreover, he sees the basic point that conscious percepts are 'passions' in the sense of the opposite of actions - in other words, receivings. His main mistake is to confuse 'unity' with 'uniqueness'. Intriguingly, I have discovered that this error was pointed out to him by a correspondent under the pseudonym 'Hyperaspistes' in 1641.
In contrast, I think Buzsáki veers off into pseudoscience, trying to relate consciousness to rhythms.
Simply, we can test machine (device) consciousness by showing that the device has the will to survive.
When faced with an environment that will inevitably result in the discontinuation of the device's function, will it take action (fight) to extend the duration of its ability to influence its environment for its own survival?
Yes, I know this isn't really a test for 'consciousness', per se. It is more of a test to see if a device can exhibit a much more fundamental characteristic that is a key defining part of consciousness. I believe that any device must exhibit this characteristic first in order to have the potential to develop into anything resembling human consciousness.
I deeply support the suggested experiment in expanding a human consciousness with complex, possibly self-sentient, equipment. However, it seems clear to me that until that equipment shows the need to survive when disconnected from the human, it will not be accepted as being conscious in itself.
In my opinion, this experiment, while surely worthwhile, is not directly related to any consciousness research, in the near term. The attempt to use the word 'consciousness' in the description of the effort is at best misleading and at worst deceitful. It merely serves to attract the attention of neuroscientists and philosophers, both of whom deal with consciousness peripherally, the former denying it and the latter obfuscating it. Of course, if I am overlooking a need to attract funding, by all means, carry on. (Oops, my generous cynicism is showing.)
Louis,
your argumentation is very interesting. I can follow it without any problems. Dilthey means understanding of other persons. 100 years ago this thought was new and a kind of revolution, but it is only a romantic idea. A person is very complex, so it is not possible to have a model of one in the mind. Naturally we model some aspects, but never an entire person. If we did, we could read the other's thoughts.
You argued with the Turing Test. We had such a discussion some months ago. I would like to recall the simple program Eliza (Weizenbaum). Today you can download such a chatbot (http://www.codeproject.com/Articles/13136/Chatterbot-Eliza) and talk with it. The answers are not so stupid, but there is nothing more than a simple algorithm, which simulates a conversation in the style of C. Rogers. Many people prefer to communicate with the program rather than with other people, because they feel understood by it. In such situations our innate indicator of intelligence and consciousness has deceived us. Pet owners sometimes make a similar mistake when they speak with their dog or canary. The reactions of the animals make them believe that their pet understands what they are saying to it.
These simple examples make me cautious when it comes to human judgment. I therefore believe it is quite understandable when we look for a more objective indicator.
A robot that is an autonomous artificial agent, like an animal, must have consciousness. I am sure that a really autonomous robot must necessarily have qualia and consciousness. If not, it will be a stupid toy like the robots of today.
Wilfried,
Do you really think that we can be like Prometheus and transfer the fire of Zeus to them?
Wilfried,
Let's assume that, like Dr. Frankenstein, you discover the secret of consciousness (life). Would you dare build a robot which is conscious, feels pain and wants to live? What a nightmare!
Dear Jonathan,
"My main point in the last post was that they ARE pretty sure that percepts do not arise from data in grids of the sort Masataka was suggesting."
I thought I had made it VERY clear, as below, that the "grid" is only an example environmental stimulus, entirely separate from how the brain represents it internally.
"I am assuming a 2D array of rectangular cells (100x100) which functions as a coarse visual display."
Relatedly, what about my following main point on the dynamic nature of representation in the brain? This is the main problem I see in your hypothesis, which claims that "acoustic modes" generated by post-synaptic states within a single dendrite are sufficient for the qualia of all senses.
"If you don't compress, you have much more than 10000 bits. If you do compress, the code that arrives onto your single dendritic tree would drastically differ depending on what is presented. Compression strategies in the brain would correspond to neural representation with neurons of various preference (e.g. on center, line edges, curvature, T-junctions, shape alphabets, faces, houses, etc.)
Given this situation, how could PSP states in a single dendritic tree lead to robust visual experiences of all sorts of various stimuli presented on the coarse display?"
For example, the internal representation of random dots would differ from that of familiar objects due to the lack of "high-level templates", but in both cases our subjective experience is clear and crisp.
Dear Wilfried,
"The adoption of a zombie is not effective. If one takes seriously the argument of the existence of zombies which does not need consciousness, we do not need consciousness at all. But we obviously have an consciousness, therefore can be no zombies."
I understand your view as one possible standpoint, but if you take into consideration, for example, epiphenomenalism, I don't think it stands as a general argument. What is your take on it?
Dear Masataka,
I think you have answered your own question in a sense. The Marr/Barlow approach assumes that as we go up the inference hierarchy in perception pathways we have cells with more and more abstract 'meanings'. We go through shapes and objects and end up with cells that have a level of abstraction akin to words in language. To reconstruct a percept from these we simply need signals from each 'word cell' to converge on each of our experiencing cells. The experiencing cell then has maybe 5,000 inputs that, taken together, describe the scene.
As I indicated, crispness is not the problem. Crispness will be the default in a simple encoding. You need more bits to encode fuzziness. What I think you mean is complexity. But even complexity can be signalled simply in words. A Fourier transform can signify certain sorts of complexity very simply.
The problem with thinking of the richness of the scene in front of us that we seem to experience is that every time we ask ourselves if we can see this or that detail we can cheat and get our brain to feed that detail through to experiencing cells. If experiences occur every 25msec or so then we will be unaware of just how much shifting of attention and focus occurs in real time. If we are presented with a scene briefly we are appallingly bad about reporting detail. That may be unfair if the exposure is too short for signals to be analysed as much as normal but in fact we are very bad even if we are shown a scene for several seconds and then asked to report it after it has been hidden.
There are serious issues about reporting experience which most neuropsychologists agree suggest that we have to accept that actual experiences are very sketchy. Victor Lamme disagrees but I think there is a problem in assuming that everything retrievable from very short term visual memory has 'already been experienced'. I think it more likely that there is a rich buffer of data that can be re-mined to generate internal images that may not correspond to any specific 25msec percept that has been had while the stimulus was available.
Dear Jonathan,
You didn't directly answer my "dynamic nature of representation" question, but I think you are implying that our sensory qualia are all dependent on abstract-level "word cells" and don't have access to anything lower. And at this level, a fixed number of "words" feeding into a single dendrite, 10,000 at maximum, is sufficient for conscious experience in all modalities. Is this true?
Dear Frank,
My understanding is that the most fundamental problem of consciousness is "qualia". Perhaps we don't share this view? This is, at least, how a large portion of neuroscientists interested in consciousness tackle the problem. We use visual illusions that render stimuli invisible (e.g. binocular rivalry, backward masking), which have nothing to do with survival, etc. When I say machine consciousness, I am also talking about qualia. I come from this school of neuroscience, and you can see my standpoint in this year's SfN abstract.
Visual backward masking in rats: a behavioral task for studying the neural mechanisms of visual awareness
*M. WATANABE1, N. TOTAH1, K. KAISER2, S. LÖWE2, N. K. LOGOTHETIS1; 1Logothetis, Max Planck Inst. For Biol. Cybernetics, Tuebingen, Germany; 2Univ. of Tuebingen, Tuebingen, Germany
Abstract: The neural mechanism of visual awareness has been primarily studied by contrasting neural activity between visible and invisible stimuli, in attempt to unveil the necessary and sufficient condition for neural representations to enter conscious vision. Visual illusions that render stimuli invisible (e.g., binocular rivalry, backward masking) are prominent behavioral paradigms.
So far, the majority of studies on visual awareness have been conducted in humans and non-human primates. Although these studies have greatly contributed to establishing brain-region-dependent modulation of neural activity by awareness, the field would benefit from being able to conduct experiments on rodents. This advance would provide access to modern techniques such as optogenetic manipulation, two-photon imaging, etc.
Here, for the first time, we report backward masking in rats. Backward masking is a visual illusion in which a target is rendered invisible by a visual mask that follows the target with a brief stimulus onset asynchrony (SOA). We first developed a head-fixed rat spherical treadmill system that is amenable to rats performing visual tasks with low-contrast, short-duration stimuli, which are required for testing backward masking.
Rats were initially trained to discriminate a "go" target (vertical grating: 0.15 cpd, 28 deg visual angle) and a "no-go" target (horizontal grating: 0.075 cpd, 28 deg visual angle) without the visual mask. They responded either by running or staying still on the treadmill during a brief time window after stimulus presentation, and were rewarded with drops of water for running in response to the "go" target and punished with a time-out penalty for running in response to the "no-go" target. Duration and contrast of the target stimuli were gradually reduced to the experimental parameters for the backward masking experiment (duration: 16 ms, luminance contrast: 15%). After achieving threshold performance (d' > 1.5), backward masking experiments were conducted with SOAs of 16, 33, 49, 66, 83, 99, and 116 ms. Plaids were used as the visual mask (duration: 33 ms, luminance contrast: 95%).
In all 5 rats, smaller SOAs led to statistically non-significant differences between hit and false-alarm rates. In contrast, differences between hit and false-alarm rates were significant for larger SOAs. The threshold SOA at which masking occurred varied across rats (range: 33 ms - 66 ms). In conclusion, a visual stimulus can be rendered invisible with short SOAs, and hence backward masking can be used to study the neural correlates of consciousness in rats.
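For readers unfamiliar with the d' > 1.5 criterion mentioned above, here is a minimal sketch (Python, with made-up numbers) of how d' is computed from hit and false-alarm rates in a go/no-go task, following standard signal detection theory:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Made-up example: 85% hits on "go" targets, 25% false alarms on "no-go" targets.
print(round(d_prime(0.85, 0.25), 2))   # ~1.71, above the d' > 1.5 training criterion
```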
Dear Masataka,
I was not sure what point you were raising with the 'dynamic nature'. To me dynamic nature means considering everything as links in a causal chain so for every group of signals forming a representation we have to define what is influenced by all those signals. It also might imply that our experiences are constantly changing at a rate that we cannot introspect.
But yes, experience would arise in cells fed purely by 'word cells', but note that words can cover things at all levels of abstraction. So there is no difficulty in encoding what we think of as raw data - like a red dot - as well as a more abstract concept like a jumping horse.
I agree with your emphasis on qualia, Masataka. To me that is what consciousness is about, or at least it is what is interesting to study, rather than complex responsive behaviour. The rat study looks neat. I guess we would have predicted that rats would have representations that can be back masked out but it is interesting to see that you can show that there are such representations, defined in terms not of reporting but of 'strategic' responses in a non-primate.
This seems to emphasise the fact that we have to define the qualia we talk about as belonging to those representations that have this sort of relation to strategic rather than 'reflex' or 'habitual' responses. We have to accept that an equivalent category of 'feels' might be associated with representations at all sorts of other levels in brain processing but since these are not hooked up to strategic responses that we can reasonably bootstrap as being closely equivalent to what we can report in ourselves nobody will ever know.
The key argument I keep coming back to is that if we are defining these representations in terms of their forward capacity to link to behaviour, which we are, then we have to have some idea where the representation 'links forward' - i.e. where it is received as a pattern and at what level of richness. The difficulty we have is that we have to rely on some sort of internal monitoring system to report what we think this richness is and this monitoring will always monitor what is in a buffer store derived from the 'link forward' and available for recycling and we have no way of knowing whether this buffer store has the same richness as the pattern in the representation.
Dear Jonathan,
The "dynamic" aspect I mentioned is only necessary if you assume that the hierarchical level of neural representation capable of sufficiently coding the stimulus differs depending on the nature of the stimulus. But if you assume that only the "abstract" level feeds into a consciousness mechanism, as in your case, you don't have a problem in this sense.
But in visual neuroscience, it is now quite established that the level of neural representation does differ depending on the stimulus. This comes from models such as hierarchical generative models and predictive coding: if there exists a high-level template for the current visual object, the lower-level activity is "explained away", but if not, the lower levels take over. There is mounting experimental evidence supporting this view, as in the now classic study by Murray et al.
Shape perception reduces activity in human primary visual cortex
Scott O. Murray, Daniel Kersten, Bruno A. Olshausen, Paul Schrater, and David L. Woods, PNAS 2002
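Just to illustrate the "explained away" logic, here is a toy sketch of my own (not the model used in the paper): the higher level infers causes that predict the lower-level input, and only the residual error remains at the lower level.

```python
import numpy as np

# Toy two-level predictive coding step (illustrative only, not Murray et al.'s model).
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))              # generative weights: high-level causes -> low-level input
x = W @ np.array([1.0, 0.0, 0.5, 0.0])    # low-level input generated from a known template

def explain_away(x, W, steps=500, lr=0.01):
    """Infer high-level causes; the better the template fits, the smaller the residual."""
    r = np.zeros(W.shape[1])              # high-level representation
    for _ in range(steps):
        error = x - W @ r                 # prediction error left at the lower level
        r += lr * (W.T @ error)           # update causes so they explain the input away
    return r, x - W @ r

r, residual = explain_away(x, W)
print("residual energy with a matching template:", float(residual @ residual))
```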
OK, but I did not say that only the abstract level feeds through. I am familiar with the argument relating to generative models. But I am suggesting that the coding at the level that feeds through to reporting and strategic behaviour that we monitor as 'our experience' is supplied by 'word' signals that can represent both low level and high level data. (In fact these would not be 'words' but atomic propositional units of a sort that external language does not use but that is another story.)
I am actually a little sceptical that what we tend to think of as raw data really are raw, rather than even more sophisticated abstractions, because they tend to require careful introspection - 'jumping horse' may come to our attention more immediately than red dot.
Louis,
there is a big gap between knowing and doing.
There is a much deeper gap between wanting and knowing.
smile.
But I will struggle on.
Masataka,
in Epiphenomenalism all mobile organisms are automata. Humans are like zombies with an unnecessary function (qualia and consciousness).
I deny such a position. This philosophical position runs counter to everything that makes up the grandeur of the human race. It is the old idea of L'homme machine, "Machine Man" or "The Human Mechanism" (Julien Offray de La Mettrie, 18th century).
Of course, from this point of view the problem of consciousness is trickily circumvented. It is much more difficult when you look at experimental data on brain activity, where the registered conscious experience is delayed when an action is initiated. But all experiments must be interpreted; by themselves they will tell you nothing about qualia or consciousness. A lot of work is waiting here ...
Masataka,
"My understanding is that the most fundamental problem of consciousness is "qualia"."
I agree.
Wilfried,
If one day you build a machine and then it starts laughing, there is a good chance it is a conscious machine. See Essai sur le rire by Bergson.
http://www.authorama.com/laughter-8.html
Bergson associated the comic with surprising events in which our consciousness detects the purely mechanical aspect of our actions. A conscious machine would thus be laughing at its own mechanical way of doing things.
Louis,
thanks for your answer. It is also a good argument against zombies - they are not able to laugh because they do not understand (!) humor.
Wilfried,
The definition of zombies contains as a premise that the zombie behaves like other humans but is totally unconscious, i.e. that consciousness does not make any difference to exterior behavior. I grant them that if this were really the case, then I would not see the point of consciousness having evolved. But they did not make the case that consciousness does nothing for behavior; it is an assumption built into their definition. It would be pointless to accept this definition and then try to argue against it. I think the only people that are zombies are those lying in a bed in a coma state. The rest of us are 99.999% zombie, because we are unconscious of 99.999% of the details of what we are doing and aware only of the remaining 0.001%. But that fraction is the key part of what we are doing; remove it and we fall onto the coma table.
Louis,
what a funny answer.
If you have read my last posts, you know that I totally agree. The zombie argument is totally stupid and an insult to all humans.
Wilfried,
Yes, the conscious robot will have to be a laughing robot; otherwise it will be a zombie robot and consequently a stupid robot. This pays respect to consciousness, whatever it is, by granting it a role in nature in general and in the future of robotics.
Regards
Dear Jonathan,
Thanks for the comments on my rat study. I am thinking that the real fun starts from here with all the modern tools. In particular, the freedom of experimental manipulation of neural activity (e.g. shutting down specific top-down influences from area A to area B) is very attractive to me in terms of testing various hypotheses on the NCC.
"But I am suggesting that the coding at the level that feeds through to reporting and strategic behaviour that we monitor as 'our experience' is supplied by 'word' signals that can represent both low level and high level data. (In fact these would not be 'words' but atomic propositional units of a sort that external language does not use but that is another story.)"
I am a bit lost now. How do you differentiate the axes of "high/low level" and "words/non-words"?
Dear Jonathan,
I was wondering how I can relate your hypothesis to my test.
Since you only need a single dendrite, it would mean that one cortical hemisphere is sufficient for full bilateral vision, as long as the PSP state of your single neuron is preserved.
That would mean that when I do my "full-visual-device" vs "look-up table" comparison, your hypothesis would predict that the two do not make any difference, because the latter preserves all device-brain interaction, resulting in identical neural states in regard to the biological hemisphere.
Of course, this is given that we do manage to generate qualia in the device visual field.
What do you think?
BTW, I have a more "doable" version of the experiment, as in the last part of my talk at the Redwood Center, UC Berkeley.
https://archive.org/details/Redwood_Center_2014_04_30_Masataka_Watanabe
The basic idea is to apply the proposed test to two biological hemispheres, where we may assume that both are "conscious". As in the video, we will artificially rewire inter-hemispheric connectivity in split-brain animals to totally monitor and manipulate inter-hemispheric neural interaction. Investigating connectivity conditions under which the bilateral percept is regained (e.g. the capability of conducting bilateral matching tasks) would provide evidence on which theories are right and which are not.
We will start pilot experiments on rodent bilateral matching in the following weeks.
Masataka,
If a proposed "conscious machine" could be presented with visual mazes like those below and successfully tell, without employing a stylus of any kind, which it could exit and which it could not, I would be inclined to think it an example of machine consciousness.
Dear Arnold,
Welcome!
But I don't understand how solving a maze would prove anything.
I basically take the semi-hardcore position that a machine achieving anything (tasks, objective behavior, etc.) would not prove consciousness. Therefore I propose my test, which claims that we need to connect the "machine of interest" to our brain and "see".
I am "semi-hardcore" in the sense that animal behavioral experiments can also provide circumstantial evidence that something very strange is happening within the device.
For example, if we connect an artificial cortical hemisphere to an animal's biological cortical hemisphere with "thin inter-hemispheric connections", and if the animal is capable of conducting a bilateral object matching task (artificial vs biological visual hemifield) with more precision than what is transmitted across the brain-device interface, we have to say that something very funky is going on in the device. (For further details, please read the added comments in the above "main question".)
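To give a feel for what "more precision than what is transmitted" means quantitatively, here is a rough back-of-envelope sketch (the numbers are illustrative only, nothing measured): Fano's inequality caps the matching accuracy that is achievable given the bits actually crossing the interface per trial.

```python
import math

# Back-of-envelope sketch (illustrative numbers, not measured values):
# if a bilateral matching task requires choosing among M equiprobable alternatives,
# Fano's inequality limits accuracy given the bits crossing the interface per trial.
def max_accuracy(bits_transmitted, n_alternatives):
    """Upper bound on matching accuracy allowed by Fano's inequality."""
    log_m = math.log2(n_alternatives)
    for i in range(1, 1000):
        p_err = i / 1000.0
        h = -p_err * math.log2(p_err) - (1 - p_err) * math.log2(1 - p_err)
        # Fano: H(p_err) + p_err*log2(M-1) must cover the residual uncertainty
        if h + p_err * math.log2(n_alternatives - 1) >= log_m - bits_transmitted:
            return 1 - p_err
    return 0.0

print(max_accuracy(bits_transmitted=2.0, n_alternatives=64))  # accuracy capped well below ceiling
print(max_accuracy(bits_transmitted=6.0, n_alternatives=64))  # near-perfect accuracy possible
```

Behavioral accuracy above such a bound, for a known interface bandwidth, would be the "funky" observation referred to above.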
Dear Masataka,
I like your proposal. But I am not sure it proves anything. (I am not a detailed expert on brain anatomy, so excuse me if I miss some of the important details.) My reply is more general. I wonder whether you think it is a 'brain' that has consciousness, or a 'person'. If we take the position that it is a person that has consciousness (which to me is the only way that makes sense; see e.g. the nice story "I am John's Brain" by Andy Clark, which points out the mistake we continuously keep making, see link), then given your set-up, "who" would it be that has the qualia, if they are experienced by the test-person? (The question already gives the answer, does it not?)
Put differently, suppose I have a blind-man's cane and I 'feel' the pavement with the tip of the cane (to take a famous example), would I then need to claim on the basis of your reasoning that the cane is somehow conscious? Even if we have two canes, and the one in my right hand generates a sense of the pavement, while the other, cane-2 does not (I only keep feeling the cane itself and not the world as touched by cane-2).
On the other hand, it does make sense to me to claim that cane-1 indeed became part of "me" and therefore it is "me and the cane together" that experience what is experienced, as opposed to cane-2. More on that below.
I guess I never saw a clear answer to Jonathan's very first reply in which he started off by saying that you cannot prove qualia: if you induce qualia by sticking an electrode in your brain, then it is not the electrode that is conscious, and it doesn't matter what material the electrode is made of.
What I wanted to add to that argument however is the possibility that a human, living body in its environment 'appropriates' a device and is able to be conscious with/through that device of the world. (Remember that consciousness is intentional: it is not just a 'state', it means experiencing a world. It makes no sense to me to talk about consciousness as if it is, say, a light being on or off, without incorporating the fact that the light shines onto something and thereby discloses a world that we are 'conscious of').
So, if you can prove anything it will be about what kinds of devices can be appropriated by living beings to become part of the overall system - that necessarily contains a whole living human being, the test subject - which sustains conscious states. That in and of itself would be extremely interesting and powerful empirical work. I believe it would touch upon the question of what kinds of technologies are able to fundamentally and deeply 'couple' to our own embodied living system (in such a way as to become part of it), as opposed to technology that somehow remains separated 'from us' and can only be dealt with by us as an 'object' in the world.
But the test would never, on my view, prove anything about the coupled device having consciousness 'all of itself', just as it doesn't make sense to say that our brain is conscious, or our left hand is conscious, or our gut is.
(I guess I agree with Louis in the end that the only way to test for that is to have the machine telling you that it is conscious, and simply not taking no for an answer;-)
Masataka: "But I don't understand how solving a [visual] maze would prove anything."
In this task, the machine must:
1. Understand what "You" means.
2. Understand what "here" means.
3. Locate its heuristic self at a coordinate within the open center region of its internal representation of the maze.
4. Understand what "exit" means.
5. Navigate its heuristic self within the pathways between the barriers that are represented internally until it either finds an exit or determines that there is no exit.
This is how humans solve the visual maze. And, crucially, the solution depends on the cognitive brain mechanisms of consciousness/subjectivity (a retinoid system). So if a machine could solve the visual maze, I would take it as evidence in support of "machine consciousness".
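For concreteness, step 5 reduces to a search over the internal representation of the maze. The following toy sketch (just the search step, not the retinoid mechanism itself) shows what "finds an exit or determines that there is no exit" means computationally:

```python
from collections import deque

# Toy internal maze representation: '#' barrier, '.' open, 'S' the agent's "here".
# An exit is any open cell on the border. Illustrative only, not the retinoid model.
MAZE = [
    "#########",
    "#S..#...#",
    "###.....#",
    "#...#.#..",   # the gap on the right border is an exit
    "#########",
]

def can_exit(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r in (0, rows - 1) or c in (0, cols - 1)) and maze[r][c] != "#":
            return True                      # reached an open border cell: an exit exists
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False                             # search exhausted: no exit reachable

print(can_exit(MAZE))  # True for this maze; False if the right-border gap is closed
```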
Dear Jelle,
I think it is important to note that Andy Clark distinguishes between the functional consciousness that he attributes to a person and can be considered in 'extended mind' terms and phenomenal consciousness, which is what Masataka is addressing. Andy has said quite clearly that in the case of phenomenal consciousness, and 'experience' is probably a better word, he thinks it belongs to some very small structure deep inside the brain. So Andy's story is about something else - the 'system' that sets up the machinery for making the inferences and collations that will feed in to the point of phenomenal experience deep inside.
Dear Jonathan,
Do you mean Block's distinction between access vs phenomenal consciousness? I was talking about the latter. Do you remember what paper it was Clark says what you claim he said? Would love to read about it. To me, saying experience - in the 'hard' way - belongs to a small structure deep inside the brain still doesn't make sense. If only to ask what you mean exactly with 'belongs to'? Anyway good luck with the experiment!
Dear Jelle,
It's not my experiment, but never mind! Andy said this at a meeting in Edinburgh I was at about four years ago. I think he tends not to put it in print, maybe because it tends to open a can of worms about extended mind. It is not about Ned Block's access/phenomenal. It is about something quite different that Whitehead put very well. Human consciousness is, he thought, special for two reasons. One is that we have a very complicated nervous system capable of abstracting from incoming signals in ways that no other animal has. This allows for the setting up of very sophisticated ideas to be fed in to 'occasions of experience' in our heads. The other is that the occasions themselves are probably very sophisticated in their way of receiving (or prehending) the complex incoming story.
In this context we can use 'consciousness' as a property of two sorts of entity. If we consider all the processes necessary to get to the experience then we include the whole nervous system and, in extended mind, even the walking stick and Otto's notebook and we say this is a property of a complex system called a person. If on the other hand we are considering consciousness as the 'having of qualia within an occasion of experience' then we have every reason to think, and Andy agreed, that we are talking of a property of a very small domain within brain tissue, which as Andy said 'would have enough bandwidth' to accommodate the sophistication of the ideas involved.
Andy Clark may have put this in print but this was a meeting where people like Bill Seager and Ken Aizawa were homing in on the inner detail of perception and how you relate it to qualia and I think Andy wanted to make sure he was not being misunderstood in an area that he may prefer to keep away from in print.
Jelle,
''(I guess I agree with Louis in the end that the only way to test for that is to have the machine telling you that it is conscious, and simply not taking no for an answer;-)''
You said it much better than I did. I did not say that but it is exactly what I was trying to say. Very good.
Thank you for all the suggestions. But I have to say that I don't agree with most of them. So let me ask one question.
Let's say we have a certain behavior (visual maze solving, laughing, telling us that its conscious, etc.) as a candidate criterion for machine consciousness.
Would you say that the criterion is still valid even if you may program a machine to do it, or a learning algorithm results in such behavior? If so, how can we tell whether it's really conscious, or simply programmed/self-organized?
Masataka,
Suppose an alien came here with an advanced science and explained to you exactly how your body works and how you behave; would you conclude that you are not conscious based on that explanation? You are as conscious after as you were before. I personally do not think that such an explanation exists, but this is beside the point. Now if I judge based on its behavior that a robot is conscious and you explain to me how you built it, it does not change anything. Again, I do not think it is possible to come up with such a case.
Dear Louis,
I am not sure, but maybe you misunderstood me.
Let's say a certain behavior can be conducted by us and also by unconscious computers of today (e.g. face detection).
So obviously, being able to conduct this behavior *does not* lead to the logical conclusion that we are not conscious or vice versa.
It only means that this "certain behavior" cannot act as a divider between a conscious being and an unconscious thing. And to me, all of the suggestions so far claiming certain actions as criteria for machine consciousness fall into this category.
Many scientists/philosophers take the position that we cannot test consciousness objectively and I am simply one of them.
Masataka,
From a young age we have acted and interacted with the world apparently within our consciousness. Only a living being can experience this from within. From early on we naturally attribute such conscious agency to other human beings and animals. The only way to attribute such a quality to any agent is by the one we naturally have. It is not an objective criterion but a natural one. I have the impression that I control my arm because I can move it the way I want. How do you determine externally that I have this capacity? How do you come up with an objective criterion that I can willfully move my arm? You cannot with an objective criterion, but you naturally can when you look at me. We naturally ascribe consciousness to certain agents without objective criteria; we do that all the time. It is not a hypothesis, it is my and your modus operandi. I am also one of the scientists/philosophers who say we cannot test consciousness objectively, but we can naturally. I am one of those.
Dear Louis,
I believe that a device with consciousness is not going to be "natural" in the sense of our current lifetime and also of Earth's history. How can you be sure that it is going to be something that we can judge "naturally"?
BTW, how do you define "objectively" in relation to your "naturally"? You might have something interesting there.
Dear Masataka, I already proposed that the criterion for any system to be conscious is feeling. A system that feels the content of the information it possesses/processes is conscious, a system that does not feel it is not conscious. In this perspective, your question would be: how do we know from the behavior of a system if it feels or not? I already wrote about this question. Take a group of systems who feel (e.g. a group of human beings) and constitute a jury to evaluate the behavior of the candidate system. If it acts in such a way that only a system who feels can do (e.g. laughing at jokes, as proposed by Louis and Wilfried), then it is (fallibly) judged to be conscious. There is no infallible criterion, as there is no infallible criterion to judge a person who is accused of criminal acts.
Masataka,
''How can you be sure that it is going to be something that we can judge "naturally"?''
We are conscious animals that naturally, without knowing how, make these assessments on a daily basis. We even rank people based on that; we say this person is ''dull'' or that one ''lively'', this one ''inspiring'', or this one ''almost dead''. We can even judge artwork as being more or less lively. Some paintings render more life while others, without our knowing why, do not. When we watch animation movies we can make judgement on these characters. When reading a book we can judge the characters. We do not know how we do it, so we cannot be sure whether our judgments are true, but it is the way we are. A test is objective as long as it is specified; a test is a machine. But it is not even possible to judge whether an objective test is testing what our natural judgment is testing. I do not think that our natural judgment uses a mechanism. I think that our natural judgment is based on mirror neurons: we self-enact the agent and so use our own body and nervous system to see the similarity with the agent.
Hi, Masataka,
and congrats for the great talk and this interesting discussion.
The presented ideas are crazy enough to be interesting – and made me modify Clarke's second law: "The only way of discovering the limits of the possible is to venture a little way past them into the craziness." And it seems that you have ventured. (Next two sentences to be read in basso profundo.) "To boldly go where man has gone before - and got stuck. At the final frontier: COMPLEXITY."
Indeed, many philosophers, neuroscientists and software engineers have stalled on the question of consciousness. Your approach is more pragmatic – it is an engineer's modus operandi: instead of philosophical pensiveness, you have constructed (mentally? - it is not clear whether you have built the device or just envisaged it) a network which could be linked to a living brain. You have succeeded in building a scientific outpost near the limits of human knowledge – at the edge of understandable complexity.
Now, back to your question: can your device be considered conscious – or have you constructed a philosophical zombie?
As I have stated elsewhere, https://www.researchgate.net/post/How_is_consciousness_generated_by_the_geometry_of_thinking IMHO your device is not conscious, nor a zombie.
The fact that you have constructed your network as stacked layers of signal-processing units, which are capable of integrating the observed input elements, does not qualify the device as conscious.
To accept that, you should accept my craziness – which differs from yours.
A short and incomplete list of these differences:
You speak about 2 streams of consciousness (further: C) in split brains. I think there are many streams of C – but at different levels. Louis describes himself as a zombie when he walks in the street while thinking about abstract things. One can have a C which drives him in the street, while another stream of C drives his thoughts about a math problem, which elicits another stream about the physics, etc. We can speak about C when this stream contains those elements which link the self to the environment in the past, present or future. E.g. my laptop can factorize a large number – but it will not understand how this operation is used in securing my credit card.
You measure C by observing or interrogating the subject. I am a partisan of the insider approach. The neuro-synaptic patterns which are the elements of the brain's language have to be understood.
You describe C in the terms of chaos theory. I see the role of this theory as an intermediate one, because the main thing lies beyond the curtain of the final frontier – which is complexity. C is simply too complex for us to understand how it works.
I don't understand exactly your referral to the (apparent?) violation of the laws of physics. Which laws are violated and by what?
Kind regards,
Andras
Dear Alfredo,
Yes I agree that consciousness is something that has a feeling or "what it's like to be" as in Block's following argument,
"P-consciousness [phenomenal consciousness] is experience. P-conscious properties are experiential properties. P-conscious states are experiential, that is, a state is P-conscious if it has experiential properties. The totality of the experiential properties of a state are “what it is like” to have it. Moving from synonyms to examples, we have P-conscious states when we see, hear, smell, taste and have pains. (Block 1995)"
But I don't believe that we can rigorously test it by simply observing behavior from a third person's perspective.
Hence my proposal to connect the device to our brain and "see for ourselves".
I know it's crazy and I know that it's not going to be easy, but I don't see any other options.
I also know that we need to be careful in setting the criteria that the "experience" was actually generated within the device, not that sensory input from the device caused the "experience" to be generated in our biological brain. But I believe that my "look-up table" control (please see the above question for details) does take care of that, given that we have experience in the device visual field with the full device, but not with the look-up table.
Dear András,
Thank you for the very interesting comment and also for going through the rather long video. Let me study your new question and comments in detail; since they seem to be a summary of your ideas, I hope that they will unfold during our discussions. So first, let me try to answer your question.
> I don't understand exactly your referral to the (apparent?) violation of the laws of physics. Which laws are violated and by what?
My "extreme red-pill" position is that even the intact (non-split) biological brain violates the law of physics in generating bilateral vision and solving related visual tasks.
If my assumptions on inter-hemispheric connectivity are valid, the two cortical hemispheres cannot and do not exchange all the information necessary to construct our bilateral percept. As we know, our visual system is organized such that the left hemisphere takes care of the right visual field, and the right takes care of the left.
And at least in low-mid level vision, there seems to be basically no space to take in and store fine-grain visual information from the ipsilateral visual hemifield.
The point is that, in spite of this, we experience the two visual fields as a whole, and also may conduct visual tasks (e.g. symmetry detection) that require direct comparison of fine-grain visual information in the two hemifields.
In this sense, my "extreme red-pill" position claims that the intact brain violates physics; "We experience the two visual hemifields as a whole and may conduct visual tasks on it, with more precision than the Shannon information exchanged between the two cortical hemispheres"
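As a rough illustration (all numbers below are assumptions for the sake of argument, not measurements), one can compare the bits a fine-grain bilateral symmetry judgment would require per trial with the bits a "thin" interhemispheric channel could plausibly carry in the available time:

```python
import math

# Back-of-envelope comparison (every number here is an illustrative assumption):
# bits needed to represent one hemifield's fine-grain pattern for a mirror-symmetry
# judgment, versus bits a "thin" interhemispheric channel could carry per trial.
n_elements = 500          # assumed resolvable pattern elements per hemifield
bits_per_element = 4      # assumed positional/feature precision per element
bits_needed = n_elements * bits_per_element

n_fibres = 100            # assumed fibres available away from the vertical meridian
rate_hz = 50              # assumed usable firing rate per fibre
window_s = 0.2            # assumed time window before the percept is formed
bits_per_spike = 1        # generous rate-code assumption
bits_transmitted = n_fibres * rate_hz * window_s * bits_per_spike

print(f"bits needed per trial:        {bits_needed}")
print(f"bits transmittable per trial: {bits_transmitted:.0f}")
# Under these assumptions the channel falls short; the claim in the text is that
# behaviour (or experience) exceeding such a bound is the interesting observation.
```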
Dear Louis,
"When we watch animation movies we can make judgement on these characters."
Isn't this a bad sign that your test is going to make a lot of false positives?
The mirror neurons are also known to make false positive responses. Under the current circumstances, my position is that we need a test that does not make false positive judgments.
BTW, I admit that my test would have "misses" for sure. Even if the connected device is "conscious", we might not experience anything due to insufficient or bad connectivity etc.
Masataka,
No, it is not a bad sign. Don't you know that an artist can inject a certain life-like quality into objects? It is not a metaphor; it really corresponds to something that is really perceived by people. Yes, the object is totally dead as an object, but it engages you in an experience that has some similarities with some aspects of real life-experience. The relation with artworks is like the observational relation when we do not interact but only observe. In the case of animation movies, the voices are the voices of real actors, the movement is provided by humans so that it has this similarity with real movement, and the music is made so that it carries the emotional narrative of the action; so you are not observing a dead thing but an experience very similar to real living experience, just with artificial surfaces.
Dear Masataka, our third person perspective observations are always interpreted from our first-person perspective. It is not necessary to connect brains with wires to make such an interpretation.
As I tried to argue earlier: if you connect the device to your brain and 'see for yourself', you do not see (or you do not know for sure whether you see) 'the consciousness of the device': the most reasonable interpretation is that what you see is that YOU are conscious (or at most YOU_as_extended_by_device). I cannot see how it is otherwise - and btw good to know now that Masataka is indeed talking about Block's P-consciousness, which is exactly what I was talking about as well, so we're on the same turf. It is not a functional aspect I am talking about: if YOU feel conscious *through* the device you have no argument yet to claim that the *device* is conscious. (And this, in the end, boils down to the fact that 'devices' are never conscious because they are devices: artificial tools used by human beings, that can become part of a human being, but not 'be' a human being. This is the main category error of strong AI.)
Very reasonable, Jelle. But if this is phenomenal consciousness, how can we say that 'you' have it and know what we mean by 'you'? As the pseudonymous correspondent Hyperaspistes said to Descartes, the problem with cogito ergo sum is that we have no reason to have faith in our conception of 'I' or in how many there are in our head. Hyperaspistes pointed out, as Elizabeth Anscombe revisited in the twentieth century, that we do not know that a report of 'my' consciousness is not a chorus in unison of a dozen subjects of experience. And as an extension we really cannot tell whether or not there are several instances of phenomenal consciousness in a device, unless of course we take an epistemic operational definition of consciousness, which is more on the access side - and even then the device might report quite convincingly.
btw, about violating laws of physics: I think it is exactly the laws of physics that get you the visual integration that you seek. The two hemifields are not only connected internally, neurally, through the corpus callosum (or the other interhemispheric connections) but also through the physical body: we only have one body, and this makes for much of the connection already. To talk about what goes on in the visual cortices as purely perceptual input (a flow of 'data' coming in 'from the outside' in order to be processed into a 'percept') is misleading. The dynamics of the signals that pass through the sensory cortex are always coupled in time to the temporal dynamics, and in effect to the unity, of the body's own movement in space. It is these sensorimotor couplings that the brain controls, and specific perceptual discrimination tasks that we set up in our laboratories may seem to be only about 'processing input' but in the end can only have meaning on the basis of these sensorimotor couplings already being in place (even if we fixate our subjects in the MRI tube - it is not dependent on the experimental setup but on the way our brain works, as fundamentally being in the business of dealing with looped sensorimotor activity - which is strongly rooted in our physical body in space - rather than processing input into a percept).
Btw I still like your experiment but as said I think you are measuring something else than you argue for.
He he - sounds like a sound argument, even though my *gut* tells me it is nonsense. But don't listen to my gut; it is often unreasonable. Anyhow: the argument strikes both ways, and so mr Hyperaspistes (or mrs) causes even more trouble for the experimental design of mr Watanabe. It is not just that we cannot tell whether or not there are several instances of p-consciousness 'in me' that cause the experience that "I experience", and it is not just that there may be several instances of consciousness in the technological device we are testing: we can also not tell whether any of these p-consciousness instances is not in fact caused by, that is causally distributed over, several structural components at the same time, only one of them being the device (and another being, for instance, my brain, or any subpart of it). And so being able to create the experience (or observing the access-derivative of it in rats) is not going to prove anything about the 'consciousness of the device'; or even if you'd want to call 'the device' conscious, you'd also have to concur that 'you are the device', or 'you and the device are one', since it is also 'you' that is conscious. (And if 'I' am not allowed to call myself conscious - then the logic of the experimental design fails as well, because it was based on the idea that "I" could see for myself whether the device was conscious - so what kind of I would *that* be, then?)
"we need to be careful in setting the criteria that the "experience" was actually generated within the device, not that sensory input from the device caused the "experience" to be generated in our biological brain. "
Dear Masataka,
Do you see a difference between a device 'triggering' or 'inflicting' or 'spurring' or 'catalyzing' a conscious state (in a brain), versus the conscious state being 'generated inside the device'? I see big differences, and also many different words that all mean something else. I think one would need to make sure one knows exactly what is meant by 'generated in' as opposed to all the other options. It may be that look-up tables do not catalyze or trigger conscious states in the way that Devices do. It would say something about how Devices are able to *couple* to an embodied, living neural system and how this relates to consciousness - but it would not prove anything about machine consciousness. Sorry to have used up so much space - again good luck with your experiment!
Dear Jelle,
"Do you see a difference between a device 'triggering' or 'inflicting' or 'spurring' or 'catalyzing' a conscious state (in a brain), versus the conscious state being 'generated inside the device'?"
"It may be that look-up tables do not catalyze or trigger conscious states in the way that Devices do."
"but it would not prove anything about machine consciousness. "
Thanks for the elaborated comments!
So, I am a little confused now. You seem to appreciate my point on using "look-up tables" as a control condition, which would preserve everything on the biological cortical hemisphere side. Let's say that, with the full artificial cortical hemisphere, we experience ipsilateral vision (in regard to the biological hemisphere), but not with the look-up table. Would you still conclude that "it does not prove anything"?
I totally understand if people are suspicious that this would ever work, but if it does, I thought most people would agree that there would be a whole lot to gain. Next step would be to hack the full device and try to understand the nitty-gritty details on the actual mechanisms that led to the generation of subjective vision, by means of "analysis by synthesis". This is nearly impossible by conducting experiments solely on the biological brain.
Dear Louis,
"No it is not a bad sign. Don't you know that an artist can inject a certain type of life-like quality to objects. It is not a metaphor, it really correspond to something that is really perceived by people. Yes the object is totally dead as an object but it engage you into an experience that has some similarities with some aspect of real life-experience."
We seem to be on totally different pages.
My position would be that "something that is really perceived by people" has nothing to do with whether a certain object has consciousness or not. For example, regardless of how other people view me, as a conscious agent or a zombie, I know that I am conscious and this is all that counts, as in Descartes's "I think, therefore I am".
So what would be your definition of machine consciousness?
Dear Masataka,
If you could prove that the laws of physics are violated in the intact brain (and in your experimental setting with a hemisphere connected to your device) during the mental construction of the whole visual field, you might well find, the next time you enter your lab, two distinguished gentlemen waiting for you. They will ask you, on behalf of the Royal Swedish Academy of Sciences, to let them look over your shoulder at how the laws of physics are violated – because such a demonstration would represent an outstanding contribution to mankind. But before they advise you to prepare your tuxedo for the ceremony, they will check whether there are other possible explanations for the claimed violations of the known laws of physics. And if other explanations are possible, such as:
1. uncharted neuroanatomic links between the two hemispheres
2. teleportation (of electrons or ions)
3. a neuronal link through phonons
any of which could explain the seemingly inexplicable information transfer, then the gentlemen will have to transfer back to Stockholm airport reporting mission failure.
And a few words about your debate on measuring P-consciousness – or, at least to identify its presence in a machine.
Most of you are trying to determine the presence of C by examining/interrogating/surveilling the behaviour of the machine/device/whatever, and comparing the reactions of the machine with your reactions in similar conditions – or making an extrapolation on this basis. This is a subjective approach, an outsider's view. It has ancient roots and a vast bibliography – but even so it probably will not be able to correctly determine the presence of consciousness in a strong AI. For that purpose an objective approach is needed, based on an insider's view.
Kind regards,
András
Dear Masataka,
In what way could the look-up table be anything else than the device, if it manages to preserve all brain-machine interactions? Isn't a device defined by its interactions with its environment?
Dear András,
Yes, I completely agree with you that proving that the "laws of physics are violated in the intact brain" is not easy, even if my proposed "realistic" experiment (53:00 https://archive.org/details/Redwood_Center_2014_04_30_Masataka_Watanabe) works with a countable number of artificial inter-hemispheric connections.
The problem is, like you say, the two cortical hemispheres might be talking to each other using very complex neural codes.
I even have an intermediate "blue-red pill" position in regard to my "Chaotic Fluctuation Hypothesis", where I suppose that the two cortical hemispheres are in sync in terms of phase-coherent or non-phase-coherent oscillations and the seemingly "hemispherically private" properties (fine-grain position information) are actually communicated using complex time-expanded codes.
So yes, it is a challenge, but as a first step, I will be happy to simply demonstrate that the neural codes that we know of today (e.g. rate codes, temporal codes) cannot explain inter-hemispheric communication and the resulting bilateral percept.
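As a side note, the phase-coherence part of the hypothesis can be pictured with a toy model of two weakly coupled, slightly detuned oscillators locking phase. This is a Kuramoto-style sketch for intuition only, not an implementation of the Chaotic Fluctuation Hypothesis itself:

```python
import numpy as np

# Toy illustration of two "hemispheric" oscillators locking phase through a weak
# coupling term (a Kuramoto-style sketch, for intuition only).
dt, steps, coupling = 0.001, 20000, 5.0
freq_a, freq_b = 40.0, 41.0                 # slightly detuned gamma-band frequencies (Hz)
phase_a, phase_b = 0.0, np.pi / 2

for _ in range(steps):
    phase_a += dt * (2 * np.pi * freq_a + coupling * np.sin(phase_b - phase_a))
    phase_b += dt * (2 * np.pi * freq_b + coupling * np.sin(phase_a - phase_b))

diff = (phase_b - phase_a + np.pi) % (2 * np.pi) - np.pi
print(f"steady-state phase difference: {diff:.3f} rad")  # settles near a constant offset
```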
Dear Jelle,
"In what way could the look-up table be anything else than the device, if it manages to preserve all brain-machine interactions? Isn't a device defined by it's interactions with its environment?"
A more "down to earth" thought experiment will be to think of a biological split-brain. So, instead of a full artificial cortical hemisphere (that generates qualia!), I will use a biological cortical hemisphere to make it more intuitively comprehensible. It is probably much easier to "swallow" the logic, because you may take for granted that "consciousness" is not solely "defined by it's interactions with its environment" in case of the biological brain. Here, the look-up table will mimick the outputs of this biological hemisphere.
As in the results of Sperry and colleagues, we first assume that the two segregated cortical hemispheres seat two independent streams of consciousness. Next, we further assume that we succeeded in rewiring the two cortical hemispheres (A and B), and the split-brain patient regains the bilateral percept.
The question is what happens when we replace one of the cortical hemispheres, say B, with a look-up table. The "look-up table" simulates interhemispheric inputs to cortical hemisphere A by taking in interhemispheric outputs from it, together with the index number of the presented stimulus.
Would the patient (cortical hemisphere A) regain bilateral percept with a look-up table?
The neural activity state of the biological cortical hemisphere A would remain identical, but there is no visual processing whatsoever that corresponds to the ipsilateral visual hemifield, biological or non-biological.
I think it is very unlikely that the patient would regain the bilateral percept under this condition. If the patient did experience the "look-up table visual hemifield", that would mean that only the ipsilateral neural representation within the biological hemisphere A is needed to generate it, nothing else.
So going back to your original question,
"In what way could the look-up table be anything else than the device?"
my point would be that a full visual device and a look-up table can be very different, just as the biological hemisphere and its look-up table can be very different in the above example. Again, borrowing your words, the key is that consciousness is not solely "defined by its interactions with its environment".
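For concreteness, here is a minimal sketch of what the look-up table condition amounts to (the interface and data structures are hypothetical, for illustration only): during a recording phase the real hemisphere B, or the full device, is driven normally and its interhemispheric outputs are stored; during the test phase those stored outputs are replayed, keyed only by the stimulus index and hemisphere A's own interhemispheric output, so all brain-device interaction is preserved while no ipsilateral visual processing takes place.

```python
# Minimal sketch of the look-up table control (hypothetical interface, illustration only).

class LookupTableHemisphere:
    def __init__(self):
        self.table = {}   # (stimulus_index, output_from_A) -> input_to_A

    def record(self, stimulus_index, output_from_a, response_from_b):
        """Recording phase: store what the real hemisphere B / full device sent back."""
        self.table[(stimulus_index, tuple(output_from_a))] = response_from_b

    def respond(self, stimulus_index, output_from_a):
        """Test phase: pure retrieval. Identical signals reach hemisphere A,
        but nothing here processes the ipsilateral visual hemifield."""
        return self.table[(stimulus_index, tuple(output_from_a))]

# Usage sketch: record with the full device, then replay in the control condition.
lut = LookupTableHemisphere()
lut.record(stimulus_index=3, output_from_a=[0, 1, 1], response_from_b=[1, 0, 0, 1])
assert lut.respond(3, [0, 1, 1]) == [1, 0, 0, 1]
```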