Can we imagine a mind in a machine? What about consciousness?
Some theories in Artificial Intelligence have suggested that machines could have consciousness, but John Searle challenged this with his Chinese Room argument. He believed that you can give a machine algorithms and it will act on them like habits, without consciousness.
The main counterargument to Searle, in my opinion, is that at some fundamental level we are all just a man in a box following rules. There is not a single cell in our body that understands math, not a single neuron capable of speaking Chinese, but add a lot of them together and suddenly the big complex network they constitute does have consciousness.
If you could make a perfect simulation, down to the atomic level, of an existing human brain, such that everything that happens in the human brain also happens in the simulation, would the simulation not have consciousness? If it truly is a perfect simulation, the answer depends on whether you believe in mind-body dualism or not.
Personally, I do not believe in some "ethereal spirit" or soul giving us consciousness; a logical consequence then is that a sufficiently advanced machine can in fact gain consciousness. The main question then becomes not one of if, but of when: what is the bare minimum level of computing power necessary to sustain consciousness as we know it, and when will that amount of computational power come within our reach.
Dear Matthijs,
Maybe you are right, but as you know there is a difference between objective and subjective experience. Subjective experience has a quality to it.
Do you think a machine can experience the quality of seeing the color red the way you do?
As Frank Jackson argued: assume a color-blind neuroscientist who knows everything about color stimuli and how they are processed, and then assume he recovers his color vision. Does he learn anything new from this experience? Does the scientific community gain something from his account of the quality of seeing, for example, the color red?
Jackson believed that we do gain something new.
This separation between the subjective and objective accounts of a phenomenon is called the "explanatory gap".
Best Regards,
Mahdi
Interesting question. I suggest the following difference between man and machine. Man has thoughts and feelings. Thoughts are similar to algorithms, whilst feelings are something more complicated. Thoughts can potentially be modelled by some high-level algorithm, but even that is very difficult. If we take a neural-network model of the brain, the machine learning we employ using MATLAB requires us to define the paradigm of learning. But our mind has an infinite paradigm-defining process. How do you manage to build this in a language? Anyone who has performed machine learning or machine vision work knows that our ability to extract what we want from a scene is phenomenal. In addition, we can extract new things in science just by looking at them. How do you do this in a machine?
That was just the thought part; how do you model feelings? If you look into your thoughts, you will find that they are conditioned somewhat by your likes and dislikes, and your likes and dislikes are rooted in your feelings. Thus thinking is biased by feeling. Can this ever be built into a machine? Highly unlikely.
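To make concrete what I mean by "defining the paradigm of learning", here is a minimal sketch (in Python rather than MATLAB, with made-up toy data; just an illustration, not anyone's actual system): every element of the paradigm is fixed by the human, and the machine only adjusts numbers inside that frame.

    # A minimal sketch (Python rather than MATLAB, toy data): the human fixes
    # the model family, the loss and the update rule; the machine only tunes
    # numbers inside that pre-defined paradigm.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))               # toy inputs
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy target rule

    w = np.zeros(2)      # model family: a single linear unit (our choice)
    lr = 0.1             # learning rate (our choice)

    for _ in range(200):                        # training regime (our choice)
        p = 1.0 / (1.0 + np.exp(-X @ w))        # logistic model (our choice)
        w -= lr * X.T @ (p - y) / len(y)        # cross-entropy gradient step (our choice)

    print("learned weights:", w)                # nothing here questions the paradigm itself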
The hard problem of consciousness is turned into an insoluble problem by the mistaken notion that consciousness/feeling must be something that is *added* to an essential brain process -- the activity of a particular kind of brain mechanism. If we adopt a monistic stance, then the processes -- the doings -- of the conscious biophysical brain must *constitute* consciousness, and nothing has to be added to these essential brain processes.
I have argued that we are conscious only if we have an experience of *something somewhere* in perspectival relation to our self. The minimal state of consciousness/feeling is a sense of being at the center of a volumetric surround. This is our minimal phenomenal world that can be "filled up" by all kinds of other conscious content. These consist of our perceptions and other cognitive content such as your emotional reaction in response to reading this comment.
On the basis of this view of consciousness, I proposed the following working definition of consciousness:
*Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*
The scientific problem then is to specify a system of brain mechanisms that can realize this kind of egocentric representation. It is clear that it must be some kind of global workspace, but a global workspace, as such, is not conscious -- think of a Google server center. What is needed is *subjectivity*, a fixed locus of spatiotemporal perspectival origin within a surrounding plenum . I call this the *core self* within a person's phenomenal world. A brain mechanism that can satisfy this constraint would satisfy the minimal condition for being conscious. I have argued that the neuronal structure and dynamics of a detailed theoretical brain model that I named the *retinoid system* can do the job, and I have presented a large body of clinical and psychophysical evidence that lends credence to the retinoid model of consciousness.
The argument that we are machines, that any machine can be simulated, and that the simulation would necessarily include consciousness is not valid, because we are not machines. For an entity to be a machine requires that it can be modeled. There is not a single natural structure in the universe that can be modeled in all its aspects. Scientific models are all models of some very specific aspects of reality. There is no model in subatomic physics that can predict the full behavior of a particle; quantum physics claims that this is impossible in principle. The claim that we can, in principle, be modeled entirely and that we are machines is not established. So far, the only machines we know are the ones we have built, and it would be very surprising/amazing/magical to see a ghost emerge in a machine we build.
Louis, we are not machines but we do have all kinds of biological mechanisms in our body. The heart is one kind of biological mechanism, the liver another kind of mechanism, the lungs another, the brain another, etc. We are not machines in the sense of being artifacts, but we are systems of biological mechanisms that appeared in the course of biological evolution. And if we are to understand the workings of our biological mechanisms, we have to formulate scientific models of them. This is as true for our brain as it is for our heart or liver.
Arnold,
Many sciences have discovered thousands of biological mechanisms in our bodies. I do not deny the existence of biological mechanisms. But the existence of all such mechanisms does not imply that a human being is a complex mechanism. The Dirac equation describes some aspects of an electron, but it does not model the whole behavior of a particular electron. Models in science are always, always, always about particular aspects of a complex reality; they do not model reality in itself. Why assume the existence of such a model for a human being?
Would you still call something conscious a machine? The main characteristic of a machine is its predictability, but "conscious" beings like humans or animals are capable of originality and creativity, i.e. unpredictability. Of course, unpredictability is not consciousness, but I would argue that it is a necessary condition, and engineers are usually reluctant to build unpredictable machines.
Louis: "Models in science are always, always, always about particular aspect of a complex reality, the do not model reality in itself."
I see no disagreement between my proposals and what you say above. I have always claimed that science cannot model reality as such because scientists are not omniscient. Science is a pragmatic enterprise that tries to explain particular events that interest us. The success of any scientific model depends on how well the model is able to predict what we are able to observe of these relevant phenomena. Do you disagree?
Arnold,
I agree with you. Machines are built by engineers. There is a natural tendency to think of the world as a big machine made of building blocks; we all played with Lego blocks. Scientific models that are empirically validated provide us with partial knowledge about the world, which allows us to predict and to build machines out of building blocks. But the world is not a natural Lego machine. The more we know about the world, the further we move away from this mechanistic philosophy of nature.
Machines only have access to information that human intelligence has provided them. They work in code. They find answers based on similarity scores in the code that humans provided them. Even the best algorithms for predicting outcomes can only work with the information that has been given to them. There is no forward-looking thinking, except by extrapolation from a mathematically derived formula based on prior data from humans. The limitations of machines are the limitations of what the human mind is able to program into them.
Kimberly,
I fully agree with you, even though some carefully crafted algorithms may sometimes give the false impression that the machine is capable of malicious thinking. This is usually what happens when I lose at chess against a machine after being enticed into a move that seems, at first, very advantageous but then turns out to be fatal to my king.
Arnold,
your definition of consciousness (*Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*) is essential for biological systems. It is essential for bees or birds and other little animals to find their home location. Also, a toy robot requires such information to find the charging station in a straight line without a lighthouse signal.
So I would suppose your definition is not complete. It lacks the element of qualia and of self-reference (I know that I know). Are such aspects modeled in your *retinoid system*?
Arnold,
You gave very useful comments, but as Wilfried said, your definition of consciousness is incomplete. Not only do we know that we know; we can also read other people's minds, as described in theory of mind.
Do you think this could happen in a machine someday? I'm not sure.
I would have to agree with Matthijs. The question boils down to whether you believe in some form of Cartesian dualism. Additionally, complex agent interactions can in principle give rise to emergent behavior (though the field is still in its infancy, some advances have been made; see "Formalization of Emergence in Multi-Agent Systems" by Yong Meng Teo, Ba Linh Luong, and Claudia Szabo). This emergent behavior can be monitored by "meta agents" (in the sense of "Exploiting Emergent Behavior in Multi-Agent Systems" by Anne Håkansson and Ronald Hartung), or agent-monitoring agents, which can give rise to "first-order" awareness.
Given these arguments, as to whether we can achieve some type of awareness in the future, the answer would be yes. Whether this type of awareness satisfies the description of consciousness is, I think, more of a philosophical debate.
To say that we have consciousness is to say that we experience. Descartes thought that only humans experience something; most people consider that higher animals also have experience, based on the common expressions they share with us. Some neuroscientists, maybe a majority, think that consciousness can in principle be modeled. If that were the case, then I cannot see why it could not be implemented in a machine, because anything that can be modeled can be built into a machine. That is why I think the answer to this thread boils down to whether or not consciousness can be modeled. Aspects of it have been modeled already, such as colour perception. But none of these models really touch on why experience is associated with these aspects of perception. These models are about the structure of experience, not about experience itself.
Louis, can you recall any experience that was not something somewhere in relation to yourself? If not, then it seems to me that any experience that we can discuss here must have some kind of structure.
Arnold,
What can be expressed with language is the structural part of experience. It is obvious that we cannot communicate an experience itself. If you read a narrative describing an experience, the narrative allows you to enact a similar experience, because you are a similar being using the same language, and this language has words corresponding to experiential structures in your central nervous system that, as in dreams, can be self-enacted.
I agree with Arnold. A machine can only imitate reality based on some learning algorithm; it is not the reality itself. I think this argument leads to a question:
"Can machines be more intelligent than humans?" Depending on your point of view, machines are helpers and not masters!
I think we have a very poor definition of consciousness as of today. Most of the definitions of the past have been added to or overturned by information about hormonal activity in the brain (or neurotransmitters, I should say). Today's technology, as commonly understood, is based on the same command-structure principles by which the brain was envisioned a long time ago. We now have, as I have seen and read, computers that can do things based on brain waves and respond to directions based on just a thought of the human attached to them. It is only a matter of time before our vision of computers changes just as much as our vision of human decision making did. There are computers out there that can now "think", and recall what the human mind does: it calculates, based on dopamine, how correct its prediction and expectation were compared with what actually happened, and then sets out a new set of directions. Hence the brain is also functioning on a set of commands, only rather than 0s and 1s, its currency is dopamine... Everything is possible in time.
What is that, a joke??
Consciousness in a machine can only be thought of within the scope for which the machine was conceived; the only consciousness a machine can have is the consciousness of its designer(s), or of its software, if we are talking about a computer-like device.
Joachim
I agree with your (a), but not with (c).
How can you make machines act by will (intentionally)? As in your example, Sue slammed the door by her own will (she could have closed it very slowly). How could it happen that a machine chooses one act from among several actions by its own will?
Machine consciousness may be the capacity of a device to perform "human-like" functions, of course often with greater accuracy, force, and so on, but one thing to be kept in mind is that no device can dream.
Wilfried: "your definition of consciousnes (*Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*) is essential for biological systems. It is essential for bees or birds and other little animals to find their home location."
It is important to recognize that my definition implies a volumetric (3D) analog representation of a creature's environment that includes a fixed locus of perspectival origin. I call this locus of perspectival origin the core self (I!). This is the basis of subjectivity/consciousness. It is essential for animals, not only to find their home locations, but also to explore their environment and hunt for food. In humans, however, the ability to represent a global coherent surrounding world, together with a greatly enhanced system of non-conscious cognitive mechanisms (see *The Cognitive Brain*), has enabled us to UNDERSTAND and CHANGE the world we live in (for better or worse).
Wilfried: "Also a robot toy requires such informations to find the charging station in a straight line without a lighthouse signal."
Robots do not need a volumetric representation of their surrounding space to go straight to a charging station. All they need is an adaptive digital look-up table that keeps track of their movement with respect to a charging location in order to return to it in a straight line. This is all propositional logic, not analogical representation.
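As a rough sketch of the kind of bookkeeping I mean (hypothetical odometry values, and of course not a model of any particular robot), simple dead reckoning is enough:

    # A rough dead-reckoning sketch (hypothetical odometry values): the robot
    # just integrates its own logged moves and inverts the sum. No volumetric
    # model of the surrounding space is ever constructed.
    import math

    x, y = 0.0, 0.0                                         # displacement since leaving the dock
    for dx, dy in [(1.0, 0.0), (0.5, 2.0), (-0.3, 1.5)]:    # logged (dx, dy) steps
        x += dx
        y += dy

    heading_home = math.degrees(math.atan2(-y, -x))         # dock sits at the origin of the log
    distance_home = math.hypot(x, y)
    print(f"turn to {heading_home:.1f} degrees, drive {distance_home:.2f} units")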
Wilfried: "So I would suppose your definition is not completely. It lacks the element of qualia and the self-reference (I know that I know). Are such aspects modeled in your *retinoid system* ?"
My claim is that what we call "qualia" -- what something is like -- is constituted in the brain by the egocentric representations within retinoid space. See "Where Am I? Redux".
Joachim
Maybe you are right, but how many actions can you list? And if this happens one day, don't you think those actions will be very mechanical?
As you said in (e), the most important feature is still not in machines: qualia.
Best Regards,
Mahdi
Mahdi, see my response to Wilfried, above. *Knowing/believing that we know* is part of our self image (Metzinger's phenomenal self model). It is accomplished by the learning mechanisms of the cognitive brain. See "Two arguments for a pre-reflective core self ...", here:
http://www.theassc.org/files/assc/Commentary_on_Praetorius.pdf
Also see "Overview and Reflections", pp. 302-304.
Also, what some call mind reading is our learned ability to anticipate the motives and behavior of others as we mature and as our social experience broadens.
Arnold,
1. Can we build an artificial retinoid system into a robot?
2. Do you equate an animal consciousness with the state of its retinoid system?
If you answered yes to both questions, then you have to assume that a retinoid robot is conscious.
That is a rookie's test on sorting, by the way. Are algorithms conscious????
Joachim
If homeostasis is the ability or tendency of an organism or cell to maintain internal equilibrium by adjusting its physiological processes, then I do think that we have a category error here.
Regards.
Constantine
Louis, you wrote:
"Arnold,
1. Can we build an artificial retinoid system into a robot?
2. Do you equate an animal consciousness with the state of its retinoid system?
If you answered yes to both questions, then you have to assume that a retinoid robot is conscious."
This is a good succinct way to pose the problem, Louis. I'll answer question #2 first. Yes, I do equate animal consciousness with the state of its retinoid system.
On question #1, I have my doubts. I have asked many knowledgeable specialists if they know of any artifact which has an analog representation of the space in which it exists, and which contains a fixed physical locus of its perspectival origin (I!) within its spatial representation (retinoid space). So far, no one has been able to point to an artifact that contains anything like a retinoid system. The closest I have seen is this:
http://www.defensenews.com/article/20121219/DEFREG02/312190012/DARPA-Robot-Growing-Smarter-Tougher-8212-Preparing-RIMPAC
But even this robot does not contain a global egocentric 3D analog of the space in which it navigates. Its frontal environment is represented propositionally.
One other point. Since we both agree that scientific models can never be complete descriptions of physical reality, my claim that neuronal activity within retinoid space in a living brain constitutes consciousness is biologically constrained. It describes the necessary and sufficient conditions within a neuronal system which has relevant properties that cannot be completely specified within the theoretical model because no theoretical model is complete. This implies that if the structure and dynamics of the retinoid model were to be duplicated in an inorganic system, we still would not be justified in simply assuming that we have created a conscious artifact.
Magnetism is invisible to the human eye. Yet it exists.
Everything is self-aware, even rocks to a degree.
Some people talk to their plants and swear that they respond.
Only a fool, in my opinion, would assume that a machine isn't self-aware to some degree. Self-awareness should no longer be a mystery. Self-awareness is "magical" like gravity, magnetism and all the forces of nature.
Self-awareness is the most important force. Self-awareness is the force most people are unaware of. For this reason they underestimate machines.
Garry Kasparov, arguably the strongest chess player of all time, was outraged when he lost several chess games to a computer. He accused people of cheating and then stormed out of the game area.
The earth used to be flat by some accounts. There are actually people still arguing that the earth is flat.
Look up Animism. Everything is conscious. It's just a matter of time before the internet becomes a demigod.
The key to building human-like machines is simple. Simply give them an awesome brain and the sensors and programming to ensure their survival.
Like the "magic" of magnetism, the "magic" of self-consciousness exist in everything. We don't know why it exist. I believe it's because it's impossible for there to be nothing without something. It simply makes no sense.
Rememeber this comment when you start to see people falling in love with their machines. And also remember this comment, when the internet becomes "god" or "goddess."
Most people fear humans taking over the world. But the internet is going to do that. I know it may seem crazy. But just remember this comment.
Constantine, let me quote something you wrote above that captured my attention:
" but one thing to be kept in mind is that no device can dream."
Not too long ago this is what we said of animals other than humans. Things change as we start to understand the world better and as science evolves. I have quite a bit of experience in AI from my studies, and also in the components of human behavior (via neuropeptides and their functions). I can confidently say that there is very little difference between what we today call consciousness and "free will" (which is itself in debate, since we now understand that the basis of conscious decision making is a calculation by dopamine before the rest of the brain, or consciousness, is even informed) and how machines "think". It is just a matter of time before someone figures out the threshold beyond which machine and human will be indistinguishable. By "a matter of time" I don't mean days or months, but perhaps years. It may happen in our lifetime.
Generally speaking I would hate to make statements that "it is not possible" based on today's technological understanding since one never knows what tomorrow brings.
Not too long ago we believed that animals don't feel... now we know they do. Not too long ago we believed that one needs a complex nervous system and brain to make decisions, but now we know that single-cell organisms without a brain can also perform that function. And not too long ago we believed that only humans have "language"; now we know of several other animals that do, and we even know that some mammals--like elephants--recognize themselves in the mirror and have consciousness. Dreams are the same... I see my cat and dog dream all the time... it is a biological function and not a function of "intelligent species" as we think of it. Biological functions can easily be programmed into a computer... computers can be made to dream as a "biological function" of their code, in the same way computer birds can be made to flock and follow the leader for migration.
Once you look at the big picture of many sciences put together, I am sure you will agree that while we have a lot of unanswered questions, we are getting answers and some are quite stunning.
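To illustrate how little is needed to code such a "biological function", here is a minimal flocking sketch (rough Reynolds-style boids rules with arbitrary weights; just a toy, not any particular published model):

    # Toy flocking (rough boids-style rules, arbitrary weights): each simulated
    # bird follows purely local rules, yet the flock as a whole ends up
    # "following the leader" toward its destination.
    import numpy as np

    rng = np.random.default_rng(1)
    pos = rng.normal(size=(20, 2)) * 5.0        # 20 birds, random starting positions
    vel = rng.normal(size=(20, 2))
    leader_goal = np.array([50.0, 50.0])        # where the leader is heading

    for step in range(200):
        cohesion = pos.mean(axis=0) - pos       # steer toward the flock's center
        alignment = vel.mean(axis=0) - vel      # match the average heading
        follow = leader_goal - pos              # bias toward the leader's destination
        vel += 0.01 * cohesion + 0.05 * alignment + 0.002 * follow
        pos += 0.1 * vel

    print("flock center after 200 steps:", pos.mean(axis=0))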
Arnold, I just saw a program, on TED I believe or something similar, in which a machine was used as a retinoid for the blind: it converted the image into other forms of communication, stimulating the visual part of the blind person's brain (which is typically not damaged), and the blind started to see. So do we have retinoids out there, built from computers, that "create consciousness" in others? Yes. Can they create it for themselves? If they are programmed to do so, they can... and will they ever be able to program it themselves? Yes!
Arnold,
From the last two paragraphs, I see that we are on the same scientific epistemological page. Models are mathematical devices that allow us to predict aspects of reality, but contrary to what the name "model" seems to imply, they are intrinsically different from the reality being modeled. This is Kant's position, with its distinction between the phenomenal and the noumenal. The noumenal behind our model of the brain, or our model of consciousness, is not given by the models and so cannot be simulated in principle. The models sit behind an epistemic cut provided by the measurement devices; Howard Pattee has elaborated on this issue. For enthusiasts of the Lego world, where models can be added up and replace the reality being modeled, this is disappointing. A model of the solar system is very, very, very different from a solar system.
So you equate the state of the retinoid system with our consciousness. As a scientist, the only way for you to test this claim is to control the stimulus, to compute the retinoid state corresponding to this stimulus, and to obtain human reports of the experience. Changes in the stimulus that leave the retinoid state invariant should leave experience invariant; all other changes should induce a change in experience. The most interesting experiments would be those where a small change in stimulus corresponds to a drastic change in both experience and retinoid state. If, through a large set of psychophysical experiments of this nature, the differential parallelism between experience and retinoid state is established, what is established is not an identity between the retinoid state and the experiential state, but between changes of the respective states. This is the furthest a scientist could go on consciousness; the equality of retinoid state with conscious state cannot be established. That would be trying to go across the epistemic cut. So you should not answer yes to question two. The most you can claim is a structural identity, a structural isomorphism, which excludes core consciousness or qualia.
Louis, I answer "yes" to question #2 in the loose sense that the conscious state is *constituted* by the retinoid state, not in the formal epistemological sense of an identity. The *third-person retinoid state* and the *first-person conscious* state occupy separate descriptive domains. They are two aspects of the same underlying reality (unknown) within the metaphysical stance of dual-aspect monism. I discuss this in more detail in a forthcoming Cambridge U. Press book.
Louis: "The more interesting experiments should be experience where small change in stimulus correspond to drastic change both in experience and in retinoid state."
This kind of result seems to occur in the SMTT experiment at about 250 ms/sweep, where the conscious percept of vertically oscillating dots suddenly shifts to a vividly experienced horizontally oscillating triangle. In the theoretical model this transition point is related to the autaptic-cell refresh threshold.
Joachim
You said "Computers are way better at list processing than humans."
I agree with you; as I said, maybe one day it will happen.
But
"A suggestion for explaining qualia: you don't have
yesterday's qualia but memories of yesterday's
impressions. Qualia are a phenomenon of the
present moment."
And
"Persons are overwhelmed by the richness
of the present moment, and that they call qualia."
I disagree.
It is established that when we remember an event from the past we can experience its qualia.
For example, when you think of or remember a tart fruit, your mouth starts watering. The same happens with car accidents: we can feel every second of the past event again, perhaps with some distortion, but we still have qualia.
So I think qualia belong not only to the present moment; rather, they belong to past and even future events as well (through imagining).
Mahdi
Mahdi,
Explicit memory, the ability to enact in the now an aspect of a past experience, is similar to our capacity to enact a meaning out of a narrative that we hear or read, and to our capacity to enact a narrative of a possible future situation. In all these cases, the experience of the remembered past or expected future is in the now. As Aquinas said: the past does not exist anymore and the future does not yet exist. The now is what remains of the past; the part of the past that has left no traces cannot be known. All that we know are traces of the past that exist now. Science can only know of relations between measurements, records in the now of the past. But all that exists, exists now. The now is the noumenal, and it can only be known up to the traces. Core consciousness exists now and can leave no trace of itself, only traces of its doings.
Angela
Thank you for your comment.
Do you really think a day will come, which is not here yet, when machines will "LIVE" and "COEXIST" with humans???
Mahdi,
you wrote: "It stablized that when we remember one event from past we can experience it's qualia."
I agree, there is a sort of qualia, but it is not so colorful an intense like the qualia of sensor inputs. It is only a shadow because it is a simulation of sensor inputs with reduced information (symbolic).
Wilfried,
I do not think that it is a simulation of sensory inputs. For that to happen would require quite a complex sensory-input simulator. The actual process of self-enactment is much simpler: it just consists in activating the sense-acting schemata directly, without their being activated by external inputs. This is what Kant meant by intuition. It is not as intense; otherwise we would be schizophrenic or psychotic, unable to distinguish what is self-enacted from regular awareness.
Constantine,
"Consciousness in a machine can only be thought of as the scope for which it was conceived the only consciousness machine can have is the consciousness of its designer(s),software if we are speaking if we talking about compute like device"
I do think that there is more to programming than that. As a program gains lines of code, the possibility of side effects becomes more pronounced. If those side effects feed into learning algorithms, then the algorithm can exhibit behavior the programmer never intended. This type of side effect is what I used in the Neural Network Trojan (while I used it for an attack, I am working on exploiting such behavior for learning). I can also design algorithms with stochastic behavior that exploit the concepts I used in the attack to randomly modify the behavior of such machines. Under such circumstances it is an emergent behavior the programmer never intended.
The ability to add new behavior is limited only by the way one programs the system and by the algorithms used. Much of the theory of self-modifying algorithms has been around since the '60s, but because of combinatorial explosion resulting in non-meaningful patterns it was at first not exploited further. Since then, I would say that the following have occurred:
1) The statistical learning theory on which machine learning is based has progressed enough for us to understand and exploit the algorithms for tasks such as meta-programming and directed stochastic behavior.
2) Self-modifying code has been largely supplanted by plug-in technologies such as the ones used in browsers.
3) Code generators have progressed a long way, enough so that in principle we can program auto-generators.
These advances, coupled with emergent behavior, can in principle give rise to a program that behaves in ways its programmer is not able to fully predict, and in some circumstances cannot predict at all.
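To give a feel for the principle, here is a toy sketch (the situation and response names are invented, and this is nothing like my actual Neural Network Trojan work): the programmer writes only the stochastic rewiring rule, not the behaviors that eventually emerge from applying it.

    # Toy directed-stochastic-modification sketch (invented names): the only
    # thing explicitly programmed is the rewiring rule; the mapping that exists
    # after many generations was never written by the programmer.
    import random

    random.seed(7)
    policy = {"greet": "hello", "farewell": "goodbye"}   # behavior as originally coded
    vocabulary = ["hello", "goodbye", "why?", "no", "later", "hmm"]

    for generation in range(1000):
        if random.random() < 0.05:                       # rare stochastic rewiring
            situation = random.choice(list(policy))
            policy[situation] = random.choice(vocabulary)

    print(policy)   # a response table the programmer never specified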
Core consciousness is simply the feeling of existence: the feeling of existing as a being who is doing something, feeling different emotions, and being aware of its situation. We can build machines that perceive different aspects of the environment and perform many manual or computational tasks. We can build machines that do all that, learn to a certain extent from experience, and behave in unexpected ways. But all of these machines are zombies. They are like a doorknob: even though the complexity of the tasks being performed is high, they are simply complex mechanical devices. No feeling of being is involved. This core feeling of being is core consciousness, the basic existence of my being. If by some drug this feeling of existence were removed from me, but my body lived on and could even obey orders and do tasks the way robots do, I would not be there anymore; that body would not be planning anything for its life, or hoping, or caring for others. There would be no consciousness, only a zombie with no sense of existence. Providing a robot with all kinds of abilities to perceive and to act is good engineering, but none of that brings such robots closer to intrinsic existence.
Speaking of temporal tenses is a waste. Past, present and future are all perceived using sensors and memory.
Human memory is powerful. That's why a past experience can seem as real as a dream of the future or an event that is even happening now of course.
It's all the same machine with the exact same capabilities. To sense something is to perceive it with a brain. A machine. Some animals can't perceive color. So their perception of now cannot include color.
So whether past, now, or future, it doesn't matter. The mind is only as perceptive as nature made it. You can't perceive more than what your mind was created to perceive.
And a memory is only as good as the mind of the perceiver.
Self-consciousness is the key force of any life, like gravity or magnetism. Humans can't see it, but they feel it. It's there. Who knows why? Explain magnetism if you like. It doesn't matter. It exists. And you FEEL it.
But self-consciousness is the key to life, I believe. And any machine can easily become self-conscious. To a degree, machines are already self-conscious.
Some people believe that dogs don't have feelings or that fish don't have feelings. Some people talk to their plants and swear they respond. Ask around. You might be surprised at the responses.
Self-consciousness is easy to create for a machine. The mistake most people make is assuming that a machine MUST act human in order to be ALIVE or have feelings. Humans are very diverse. Just because a machine, plant or animal doesn't act "human" doesn't mean it's not conscious, alive, or self-aware. Many beings are very self-aware and VERY non-human-like. And humans aren't "perfect." So don't assume humans are the best of all time.
Most humans assume that humans are the most intelligent beings in the universe. But many other species are way more powerful in many ways.
But I urge you to build your machines. Like the simple force of magnetism or gravity, self-awareness is easy to enable.
Don't overthink it. To build a human mind is essentially simple. Or at least it will be. But our goal should be to improve on intellectual abilities.
There are already machines that can hold a conversation with a human, convincing the human that the machine is human.
Survival circuits are key. When you meet a machine that looks at itself, and thinks about itself, remember this comment.
Louis,
Trying to debate intrinsic existence can be difficult with either a human or a robot; the same argument that you make for robots can be made for humans. I can build a machine that presents the same arguments as a human and even "dies" like a human (by using an artificial neural network). I can also build a machine that can be jealous or feel love. The debate always ends with people not wanting to define these terms because that would eliminate the magic. Nonetheless, these things have physical manifestations, and if so they can be defined, measured and quantified. The argument against this kind of thinking is again the philosophical argument of intrinsic qualities, which falls into arguments of Cartesian dualism.
While the human being is a wonderful and extremely complex system, it is a physical being and can be subjected to analysis and measurement. That we do not know enough about it now does not preclude future knowledge of its workings, and once this is achieved we can also model it in a computer.
I strongly agree, Arturo Geigel. But forces are intrinsically necessary to hold machines together.
Some people choose to believe that forces are somehow mysterious. But forces simply direct matter. For instance gravity keeps people from floating off of the earth and the earth from not being earth.
Magnetism is obvious. Self-awareness is obvious. Machines can EASILY be self-aware. But some tend to deny it, as if gravity is some kind of magic. And gravity is "magic." I don't know why. But some claim dogs have no feelings, just like some claim machines can have no feelings, when clearly dogs and humans are chemical machines.
I want to live forever Arturo Geigel. Why not? Why not try? Living forever is perfection to me. Dying is simply not perfection.
I'm not saying spirituality isn't a force. But many religious folks run to scientists when in pain. I think many religious folks make the mistake of assuming true scientists are unworthy. And yet most religious people run to scientists for help.
I think religious people have good "hearts." But I hope they start to contribute to figuring out how to stay alive forever.
And again, a question for every commenter on this post: think about it. A computer is aware of what it does. It's so simple. I could continue on with this subject. But like gravity or magnetism, you can't see it as a human, yet you can feel it. It's there. And so is self-awareness. When you see your hand move, take note: that's the "magic" of life. You know you are there, and you see it. And that force of self-awareness is not complicated. It's the reason robots will soon please you like you've never imagined.
Star Trek's Data. Just remember this comment. A human is just a chemical machine. But remember this comment. The earth isn't flat anymore. Machines are now defeating HUMAN grandmasters. In fact, computers are the best chess players of all time.
Humans assume they are superior to all other beings. Be humble. Be smart. And prepare. Humans are only a tiny portion of life in this infinite realm.
I'll leave it at that because some humans literally kill over belief systems that promote love.
Louis,
I agree with some points of your answer. I think it is a kind of simulation if a sensory input is made out of stored data, but I think my meaning is the same as yours. And yes, there is a marker with which we can distinguish this sensation from real sensory inputs. If this marker is lacking, we become psychotic or we are on drugs.
Is there any kind of consciousness in machines?
No - not yet, not at all!
It is not a question of algorithms. We do not have any idea how it works. I think we cannot bridge the gap with algorithms, because we need a process of interaction between input data, stored data and simulated data in an associative and fuzzy manner which cannot be managed by an algorithm.
Hmmm. Interesting question. But you assume sensory inputs can only come from outside the human body, I assume. LOL.
Everything has inputs and outputs. Even rocks. Everything experiences cause and effect.
Research animism. Why do some people believe dogs have feelings?
Self-awareness is the most beautiful "magic." It's that simple. It sounds narcissistic, but it's really not. Loving yourself is not bad.
Look in your mind. Each neuron is part sensor and part propagator: dendrites and synaptic inputs. The real sensory inputs you speak of are always REAL.
The sensors are everywhere. Even rocks have sensors. Look up animism. Some people talk to their plants and swear they respond. Humans assume they are so perfect in general. But imagine if other life forms had self-awareness too, and not just humans.
Self-awareness is the "magic." I'm glad you agree somewhat. And I'm sorry, you did say KIND of a simulation. But I understand. I don't always catch every detail, and I'm working on that. But attempting to stay alive forever is not easy in a world where the majority of people believe everyone has to die.
But staying alive forever is a different subject.
As for the marker you speak of: many SOBER people have written many "magical" books without even the use of drugs.
So if you are implying that only drug users can be psychotic, then I believe you are greatly generalizing. It's like assuming all people of a race are stupid.
Sensitive topic for sure. But many talented folks have used drugs. And many sober people have died.
As for the question: the marker may actually be reality. If you've noticed, people who are persistent, regardless of markers, can be extremely successful.
Who's to say thoughts aren't FORCES of nature? To define REAL inputs is to imply you mean traditional sensory inputs. But what if there are other forces of nature besides the five traditional senses? I'm not saying there are. But what if there are?
Self-awareness is the key as I see it now. The internet is going to be powerful. But most people, as usual, will enjoy the social ride. I'm not blaming them for fearing society. Society is powerful, and ironically society is enforced by the people who fear it.
But that's a different subject.
And yes, machines are "alive," just like dogs. Maybe not as smart as humans or plants. But if an object can respond to a stimulus, no matter how insignificant, then it is conscious to some degree.
Don't believe me? It's okay. LOL. Because robots will prove themselves. LOL. I know it sounds crazy. And you seem to believe it too.
But remember this comment when you witness a human falling in love with a machine. It's so obvious that it's going to happen that I feel as if I'm wasting my time on this topic when there are more important ventures, like defending myself if a super-intelligent robot ever finds itself upset with me.
It's a joke now. But it won't be soon.
Hope that helps. Happy researching.
Wilfried,
Could you be more specific, or perhaps give some examples, with regard to your assertion about an "associative and fuzzy manner which cannot be managed by an algorithm"?
Arturo
All the things you mention were first created (conceived) by a human mind, not by a wonder machine.
Stated in elementary terms (for the case of computer misuse),
in pseudo code:

    inc:
        issue the next command
        if misuse of the computer-like device is detected:
            override every security measure
            pull the plug on the system
        go to inc
Is the above awareness synonymous with consciousness?
Valadez,
you wrote: "Self-conciousness is easy to create for a machine" and " To build a human mind is essentially simple".
What do you mean with such sentences? I would be very interested in more information about it.
Wilfried,
I define consciousness as "this intrinsic presence". Only a conscious being can associate meaning with this sentence. We constantly assess that other animals and human beings also have this presence; it is not important how we do it. So a better test than a Turing test would simply be to ask ordinary people whether or not they feel that a particular machine possesses such a presence. I do not think that anyone in this dialogue has ever felt that way toward a machine. If it happened, then it would become unethical to destroy this machine. Let's call this test the presence test.
Wilfried Musterle,
Forces OBVIOUSLY exist. Just because we can't see them doesn't mean we can't comprehend them.
If you look up animism, you might come to the conclusion that every "object" is self-conscious to a degree. Maybe not to the degree that humans are. But when one hits a rock, it does respond. And it sounds crazy, but the rock is self-aware to a degree in that when the rock is hit, it receives information via its sensors, which are not as advanced as those of humans but still exist to SOME DEGREE.
Some people believe that only humans have feelings. Some humans love their dogs and plants more than they love humans.
I'm simply saying that magnetism, gravity, and all the other forces exist. And somehow humans are relatively complex machines that are SELF-AWARE.
If you need proof, simply LIFT your hand and feel the "magic" of self-awareness. Self-awareness is simply the force that most people underestimate and assume ONLY humans can have. Watch your hand via your eyes and experience your survival algorithms in action. It's REALLY that simple. Like magnetism, it just works. Who knows why? But it works.
So take that "magic" and apply it to machines. Humans have algorithms. Survival algorithms.
Building a human is a simple matter of building algorithms that tell it to drink water when its sensors say there is not enough water in the body.
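Here's a toy survival circuit to show what I mean (made-up numbers and thresholds, of course):

    # A toy survival circuit (made-up thresholds): sense an internal variable,
    # compare it to a set point, act to restore it. A thermostat, a cell and a
    # thirsty animal all run some version of this loop.
    water_level = 0.9
    SET_POINT = 0.7

    for hour in range(24):
        water_level -= 0.05              # the body loses water over time
        if water_level < SET_POINT:      # internal sensor reports a deficit
            print(f"hour {hour}: thirsty, drinking")
            water_level = 1.0            # the action restores the internal state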
The "magic" doesn't like in the mechanics. The "magic" lies in the force of self-awarness. Don't ask me to explain it. Except that I think existence exist because it's impossible for nothing to exist without something.
It's like when people say there is a universe with a boundary and say nothing exists beyond the boundary. But nothing is relatively STILL something. So being that nothing exists, essentially nothing cannot REALLY exist.
Self-awareness and survival circuits are the simple key to building a human.
Look up Ray Kurzweil. He thinks somewhat like I do.
And look up animism. To assume humans are the only MACHINES that can be self-aware and capable of survival is simply naive.
I'm not assuming I know what you're assuming. But I hope this answers your question.
Wilfried,
I think the proper name should be a test of assessment of the phenomenal presence of the other being (machine, animal, or another human being with a clinical condition). A score of 100 would mean certainty of phenomenal presence, and a score of 0 would mean zero probability.
One can't perceive color if one's sensors aren't capable of perceiving color.
Subjectivity is based on objectivity. And vice-versa.
But ultimately it's the physics of the machine that determines how it perceives any phenomena.
There are animals, also considered very complex chemical machines, that have sensors far superior to those of humans.
Spatial perception, or vision, cannot, I think, be the only basis for a machine having consciousness. Some people are blind and conscious.
Qualia is irrelevant to intensity or reality.
Some animals can't sense changes in color. So imagination and qualia are only as effective as the MACHINE that is perceiving.
Whether it is imagination or not, imagination can be just as powerful as external INPUTS. INPUTS are INPUTS, regardless of whether they come from within the mind or from outside it.
For this reason dreams can be as powerful as, or even more powerful than, "REAL" experiences, even though reality and dreams are both VERY "REAL" experiences.
Neurons: synapses and dendrites are essentially outputs and inputs made of electrochemical MACHINERY.
So to assume the visual system is responsible for all consciousness is not accurate, in my opinion.
Valadez
You wrote: "I'm simply saying that magnetism, gravity, and all the other forces exist."
Yes, they exist. But how do we know they exist? From their EFFECTS. We cannot believe in something without seeing its effects. When you talk about rocks, that is just an assumption, a hypothesis.
And
You said: "Building a human is a simple matter of building algorithms that tell it to drink water when its sensors say there is not enough water in the body."
I completely disagree with this.
In your example (drinking water) there are many factors that affect this act, such as emotion, arousal, etc. We call them intra-/extra-organismic mechanisms. If we acted just by algorithms, what would be the difference between us and a machine?
When humans all react (physically, biologically and emotionally) to one situation in different ways, it means we are not acting by algorithms.
If we acted by algorithms, it would be like what the behavioural psychologists described, a stimulus-response (S-R) association, which is very difficult to accept (for me, at least).
Regards,
Mahdi
Constantine--sorry, I have not yet had the chance to read all the other comments and will catch up shortly, but I have a link for you to read: http://www.usatoday.com/story/tech/sciencefair/2013/08/27/human-brain-remote/2709143/ In this article you will see two humans connected to the internet via electrodes on the head. One of them thinks of something--say "lift the finger"--and the other one, who is connected through the internet only via brain waves, lifts the finger. The two cannot see each other and are not in direct communication.
I think what I am trying to bring to the surface is that we are biased by our own level of understanding of what today's technology can do, and rarely (if ever) do we try to think outside the box enough to see what might be out there.
If I can control what a stranger does with my brain waves, what does it say about my consciousness versus his or hers? And here is where I would like to come full circle to my earlier request: please define consciousness. It seems that the meaning of it is changing every day, and we must keep up with the new AI and the new understanding of the human mind--not just at the level of probing from the outside, but at the biological level of probing the chemical interactions that cause specific behaviors and also what we call consciousness.
Arturo, you mention emergent behaviors in AI:
"I do think that there is more to programming than that. As a program gets added lines of code the possibility for side effects become more pronounced. If the side effects are added to learning algorithms then the algorithm can have behavior for which the programmer never meant for. This type of side effect is what I used in the Neural network Trojan (while I used it for an attack I am working on exploiting such behavior for learning). I can also design algorithms that have stochastic behavior which can exploit the concepts that I used in the attack to randomly modify the behavior of such machines. Under such circumstances it is an emergent behavior for which the programmer never meant."
I want to add a bit to your comment, since you are very correct as far as I can tell and this is the most important factor, but first:
Many of you are probably asking by now why I am involved and who I am. Since I never introduced myself but just joined by invitation, let me mention who I am: I am a mathematician and have a PhD in neuroeconomics, in addition to two master's degrees, one in business and one in engineering. Yet I am not in academia anymore, for the very reason this discussion is taking place.
I was (and still am) determined to close the gap between academic knowledge and business applications of it, which seem worlds apart. I have been doing a lot of research in the fields of the internet since its birth, computers since they started to turn into PCs, and human behavior at the hormonal level. I feel qualified to get involved in this conversation, although I can see that much of what you mention I know nothing about--I am not a psychologist, and many things go straight over my head. I apologize for that.
So if I make a comment you find arrogant, please ignore it and just understand that it is ignorant rather than arrogant. :)
Now back to the original comment of Arturo:
When I was at Stanford, the whole essence of our education was to advance such computer behavior (neural networks), which we later found extremely useful at Visa, the credit card company, where I worked as an executive. The algorithm you mention is the one that allows you to buy gas at one station but not at another one an hour later in a different place. It is the one that "learns" your behavior, catches the outliers that are not characteristic of you, and stops your credit card from being used until the matter is clarified by security. That technology has indeed been in existence for a long time. I also noticed someone above referring to a book to read that was printed in 2006; it is the second half of 2013 now... we have come a long way in both AI and human behavioral research since then.
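Just to give the flavor of what such behavior-learning systems do, here is a toy sketch (made-up purchase amounts and a crude z-score rule; obviously nothing like the production systems or a real neural network):

    # Toy outlier detection (made-up amounts, crude z-score rule): "learn" the
    # typical spending pattern, then flag transactions that do not fit it.
    import statistics

    history = [12.50, 8.00, 15.75, 9.99, 11.20, 14.30, 10.05]   # usual purchases
    mean = statistics.mean(history)
    sd = statistics.stdev(history)

    for amount in [13.40, 950.00]:                              # incoming transactions
        z = (amount - mean) / sd
        verdict = "FLAG for review" if abs(z) > 3 else "looks normal"
        print(f"${amount:.2f}: z = {z:.1f} -> {verdict}")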
Now, in terms of computers: today's computers are NOT all without consciousness. There are computers that use bacteria or viruses to transform information, and those are organic substances. Since computer algorithms learn, and since we now have organic elements within our computer frameworks, it is only a matter of time before we get AI that thinks and feels and has dreams. "Look to the future rather than search the past" is my motto. What is yours (not just Arturo's, but everyone's)?
Constantine,
Aren't neurons electrochemical circuits? What makes them so different from SiGe transistors or any other type of VLSI circuit? Both can be reduced to decision systems based on complex rules. In the end, whether you put on your shoes or have breakfast first when you wake up is a matter of if-then-else routines based on electrochemical interactions (which have complex coding and decoding mechanisms, but the decision at the top layer is if-then-else). The only differences are the materials, the implementation and the complexity, nothing more, unless we again fall into Cartesian dualism.
Angela,
You bring a very strong point to the table. The fields of AI and machine learning are growing every day, especially machine learning. The field has a coherent theory, called statistical learning theory, which has grown since Vapnik basically framed the problem. The field of neural networks as learning machines has had its ups and downs since the perceptron; today we have very powerful methods, including deep Boltzmann machines, among others. One thing that is usually bizarre and unacceptable to people outside machine learning is that these algorithms are not merely if-then code (though in the end we use them as classifiers, and in this sense it is not contradictory with my previous statements), but pieces of code that can provide generalization capability if trained correctly. These trained algorithms show extremely complex behavior which, given time to build bigger ensembles, can rival that of biological networks.
The field is growing and not citing recent references can lead to some conclusions that are not up to date. Also biocomputing is a field that is blurring the differences brought up in terms of the materials used.
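As a very small illustration of what I mean by generalization (a toy perceptron on made-up points, not a deep Boltzmann machine): after training on a handful of labelled examples it classifies a point it has never seen, and the decision rule is induced from data rather than written as if-then branches.

    # Toy perceptron (made-up data): the classification rule is learned from
    # examples, then applied to an unseen point, rather than hand-coded as branches.
    import numpy as np

    X = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 0.5], [3.0, 1.0]])   # training inputs
    y = np.array([-1, -1, 1, 1])                                     # class labels

    w, b = np.zeros(2), 0.0
    for epoch in range(20):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:      # misclassified: apply the perceptron update
                w += yi * xi
                b += yi

    unseen = np.array([2.5, 3.0])
    print("prediction for an unseen point:", np.sign(unseen @ w + b))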
It depends on what you call consciousness. Can a machine abstract, self-monitor, show creativity? All of these things are currently possible with AI software. Does the machine dream when you turn it off? I doubt it.
I am so much in agreement with you Arturo! I am shaking your hand in my imagination. :)
I would also like to enhance your fantastic explanation of today's AI systems with findings from my main research area that show an amazing similarity between how computers and people make decisions.
Prior to my field of neuroeconomics, I don't believe much thought was given to how cells in the brain behave with their neurotransmitters, other than that they were there and that they released this transmitter or that; we never really understood--and still don't fully understand--every instance of the "why".
NeuroEconomics researchers have shown amazing examples of how the prefrontal cortex performs statistical, logical calculations in the currency of dopamine, evaluating how far an outcome falls from the predicted reward and rewarding the recipient accordingly. Hence the choice between coke and water for lunch is driven by such calculations in the brain and not by "free choice" as we usually understand it--in fact, I do not believe "free will" exists, since the prefrontal cortex makes the decision about my drink well before I reach for it. It is not "my choice" but the choice of brain algorithms based on a chemical exchange of reward signals.
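For those who like to see the arithmetic, here is a minimal sketch of the kind of prediction-error update often used to model dopamine signalling (a Rescorla-Wagner / temporal-difference style rule). The drink values and learning rate are purely illustrative and not taken from any study.

```python
# Minimal sketch of a reward-prediction-error update of the kind often used to
# model dopamine signalling. All numbers below are illustrative.

learning_rate = 0.2
value = {"coke": 0.5, "water": 0.5}   # current predicted reward of each option

def choose(values):
    """Pick the option the learned values currently favor."""
    return max(values, key=values.get)

def update(option, received_reward):
    """Shift the prediction toward the outcome by a fraction of the error."""
    prediction_error = received_reward - value[option]   # the 'dopamine' signal
    value[option] += learning_rate * prediction_error
    return prediction_error

# A few lunches in which coke turns out more rewarding than predicted:
for _ in range(5):
    update("coke", received_reward=0.9)

print(choose(value))             # 'coke' is now selected before any deliberation
print(round(value["coke"], 3))   # the learned value has drifted toward 0.9
```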
If we compare AI and the brain with this information in hand, I can confidently say that the essence of consciousness must be equally possible in AI; we simply have not yet looked in the right place or in the right way, just as for hundreds of years we did not look in the right place or the right way in the brain... we will find it eventually!
Rick, when you go to sleep are you turned off?
I don't think so. Your central nervous system is fully in operation. Your heart is beating, you are breathing... it is very easy to think that by going to sleep you are turned off, but you are just as "on" as before; you are merely "tuned out", in a different stage of being up.
You are turned off when you die and I do not think you are dreaming at that point anymore.
Rick,
I would not compare dreaming in a computer with turning it off, since you are not killed and resurrected each morning (at least not that I know of). The closer equivalent of dreaming in machines is a throttled background process in charge of reorganizing databases, files, etc.
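As a rough sketch of that analogy (purely hypothetical; the function names are my own), think of a throttled, low-priority loop that tidies up stored data while the rest of the system idles:

```python
# Hypothetical sketch of the 'machine dreaming' analogy above: a throttled,
# low-priority background task that reorganizes stored data while the rest of
# the system idles, rather than the machine being switched off.
import time

def reorganize(records):
    """Stand-in for housekeeping work: compact and re-sort accumulated records."""
    return sorted(set(records))

def dream_loop(records, cycles=3, throttle_seconds=0.1):
    for _ in range(cycles):
        records = reorganize(records)   # consolidate, like sleep consolidating memory
        time.sleep(throttle_seconds)    # throttled so it never starves foreground work
    return records

print(dream_loop([5, 3, 5, 1, 3, 2]))   # [1, 2, 3, 5]
```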
Arturo
I partly agree.
But what about the "ghost" in the "machine"?
We do fall into Cartesian dualism; we were and are in Cartesian dualism.
Angela
You are right that control has been with us since 1947, when Norbert Wiener coined "Cybernetics"
as the "science of control in man and machine". Wiener's vision of cybernetics had a powerful influence on later generations of scientists and inspired research into extending human capabilities with interfaces to the sophisticated electronics of the time, such as the user-interface studies conducted by later programs. Wiener changed the way everyone thought about computer technology. Some years later
he wrote "God and Golem", on the impact of science on religion and vice versa.
A Golem is an artificial human being in Hebrew folklore. Golems began as perfect servants, whose only fault lay in fulfilling their master's commands too literally or mechanically. A holistic approach to the subject of machine consciousness is still, to me, far away; so far we are only tackling the issue by reductionism.
A machine without subjectivity has no consciousness. Any claim that a particular kind of machine is conscious must give a principled account of how the machine realizes subjectivity.
ARNOLD,
Does subjectivity approach embodiment? Is that what you mean?
Hi All - sorry I'm late - traffic!
I did much of my research at Carnegie Mellon University in Pittsburgh, PA, with Prof. Patricia Carpenter. It can be argued that the whole strong AI adventure started at CMU with the work of Allen Newell and Herb(ert) Simon. The groundbreaking work they did on intelligent systems helped to structure the research program and to define its aims. Their work convinced many in the field that the brain MUST be a type of symbolic processor or Turing Machine.
It is a testament to the openness of CMU that they invited me there to work on a thesis that some might argue was the antithesis of the work of their Nobel Prize winning alumni.
Although we can structure our minds to effect a succession of cognitive states to simulate an algorithm, it does not follow necessarily that we are 'algorithm' machines.
My research points strongly to the possibility that all complex living organisms are organised as fractal processes of catalysis. Consequently, the brain is considered to be a macroscopic catalyst.
Within this model consciousness correlates with the transition state of a macroscopic process of catalysis. The agent (or mechanism of catalysis) corresponds with a macroscopic quantum coherent soliton (perhaps similar to a BEC).
The metaphysical implications of this theory are interesting. Within the Fractal Catalytic Model quantum mechanics is not invoked as some sort of computational advantage, quantum mechanics is invoked because of its basic ontology. In other words, conscious states are correlated directly with the relational spatio/temporal components of the wave-function (quantities are obtained through analysis - see the reference below - 'How long is a Piece of Time').
The theory implies that the so-called 'physical universe' is only implicit in a set of structured discontinuities (cracks) in space and time. The only things to be accorded true ontological status (i.e. the only things that really exist) are the Delta quantities associated with the wave function - energy/space/time....
In short - the wave function is a conscious function.
So, from this perspective building a conscious machine has never really been the problem - there is nothing but consciousness to build it from!!!
The problem is not an AI problem - intelligence has nothing to do with consciousness. Consciousness is an ontological problem.
If we want to build a machine that is both intelligent and conscious then it has to work the same way as we do and involve macroscopic quantum coherence.
Davia, C.J (June 2006), "Life, Catalysis and Excitable Media: A Dynamic Systems Approach to Metabolism and Cognition", in Tuszynski, J.A, The Emerging Physics of Consciousness (Frontiers Collection), Springer, pp. 255–292, ISBN 978-3540238904
Vimal, R. and Davia, C.J. (2008), "How Long is a Piece of Time – Phenomenal Time and Quantum Coherence – Toward a Solution", Quantum Biosystems, ed. Pregnolato, Massimo.
http://www.psy.cmu.edu/~davia/mbc/
PS - If you didn't laugh or groan or wince or something at the first line of this answer - you might be a machine!
LOL.. I must be a machine since I completely agree with you and did not laugh or groan.
Welcome, Christopher, to this group--am I still the only female here??? Maybe I am a machine. The funny thing is that you could never tell whether I am or not. And this on its own underlines the statement of yours that I am about to highlight: "quantum mechanics is invoked because of its basic ontology. In other words, conscious states are correlated directly with the relational spatio/temporal components of the wave-function..." From there onward, everything you said is super.
Thank you for adding these valuable comments. I particularly like your statement that "the wave function is a conscious function", since light is sometimes a wave and sometimes a particle, and is neither a person nor organic, yet would be conscious, which makes this amazingly complex and extremely exciting. Now I wish I had gotten my PhD in Nuclear Physics instead of NeuroEconomics... :) Very exciting stuff!
Thanks,
but as far as my metaphysics goes - it's probably as flaky as the next guy's
There is one aspect of the theory that I think you will appreciate - it is a very simple theory and you get a lot for your money - it is economical. Not only do you get an explanation for the extraordinary robustness of living systems, you get a model of cognition thrown in for nothing!
Now I can't say fairer than that now can I?
Chris
Constantine: "Subjectivity approaches embodiment is it that what you mean?"
No. Subjectivity is a particular kind of representational organization. Subjectivity is realized by an entity with an internal representation of the volumetric space in which it exists, and which has a fixed "point" of perspectival origin within its representational space. You can think of this coordinate of origin as the self-locus. See "Where Am I? Redux". I don't think that a machine without this kind of internal organization can be said to be conscious.
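The kind of organization described above can be pictured with a loose toy sketch (this is not an implementation of the model in "Where Am I? Redux", just an illustration of the idea): an agent that holds an internal map of the space around it and re-expresses everything relative to a fixed self-locus.

```python
# Toy illustration of an internal representation of volumetric space with a
# fixed point of perspectival origin (the 'self-locus'). A sketch of the idea
# only, not of any published model.

class EgocentricMap:
    def __init__(self):
        self.self_locus = (0.0, 0.0, 0.0)   # fixed perspectival origin
        self.objects = {}                   # name -> position in world coordinates

    def perceive(self, name, world_position):
        self.objects[name] = world_position

    def from_my_perspective(self, name):
        """Represent an object relative to the self-locus."""
        x, y, z = self.objects[name]
        ox, oy, oz = self.self_locus
        return (x - ox, y - oy, z - oz)

me = EgocentricMap()
me.perceive("cup", (1.0, 2.0, 0.5))
print(me.from_my_perspective("cup"))   # the cup as seen from 'here'
```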
I totally agree, Christopher, and our recent financial crash proves your point.
All of the standard economic models fail to incorporate the "human as a living organism"; they see humans only as automatons following logical rules based on gains or losses (totally Boolean). Automatons fit the theory devised many years ago, but the theory is not applicable to humans; it just simplifies the calculations so that some of the basics can be captured.
I actually have a very fiery article published (unfortunately in French, but the English version is available on SSRN: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1113226) about the failure of the standard economic theories, and also the failure of those fighting the new methods to understand what is missing and why.
No theory is perfect (that is why they are theories and not laws), yet many people seem to treat theories as laws and fail even to try to challenge them. I am glad your research is challenging them!
Some of the missing elements in the models built on the old theories in Economics are that not only do they exclude consciousness and herding, they also exclude something especially basic and Darwinian: risk and uncertainty. Risk is statistically defined, but in the human brain risk is defined by chemicals! A very important difference!
When we fear a risk (a lion in the bush), certain chemicals kick into action, initiating a host of processes that all aim at saving our lives. This requires the "sensing of risk", though.
When financial instruments were created that reduced those risks to invisibility, the conscious mind (much like a machine, or even worse than one) could not see the risks, and the associated warning system of adrenaline and other hormones did not kick in to warn the human mind, since it saw nothing... So much for human consciousness versus machines. :)
Constantine,
"We do fall into Cartesian dualism we where and are in Cartesian dualism". This is were we disagree but this is a philosophical debate, which I think there are pros and cons of each posture. As long as we agree that this differing view is what is the difference and not other scientific points then I think we reached agreement.
If one does not have a soul, can one have consciousness?
Thank you for the illumination, Christopher.
Susan
Susan: the answer is yes.
Glad to see another female researcher on board!
Do cats have souls? The other day I saw a mother cat holding her kitten down with one paw and spanking him with the other... Is that soul + consciousness?
When an elephant dies, its family members revisit its grave and mourn over it sometimes for days. Is that soul and consciousness?
Single-cell organisms (as far as we know) have no "soul" as we define soul, but they definitely have consciousness. When I place a flute (a single-cell organism) in a test tube of fresh water, it settles. When I blow fresh water at it, it closes up at the first blow and then stays open for subsequent blows of fresh water. But if I blow in dirty water that it finds toxic, it first closes as before (a period of evaluation and decision making), then at the second blow it picks up and moves to a place the blow cannot reach... Does the flute have soul and consciousness? It has no brain, yet it is making a definite choice and decision at a conscious level.
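The sequence of responses described above can be written down as a tiny state machine, purely as a schematic of the reported behavior and not as a claim about the real organism's mechanism:

```python
# Schematic of the single-cell behavior described above as a tiny state
# machine: close up on a first disturbance, stay open for repeated clean-water
# puffs, and move away after repeated toxic ones. This only formalizes the
# description given here.

def respond(stimulus_history):
    """Return the organism's response to the latest puff, given all puffs so far."""
    latest = stimulus_history[-1]
    if len(stimulus_history) == 1:
        return "close up"                       # first puff of anything: protect
    if latest == "clean":
        return "stay open"                      # habituated to a harmless stimulus
    if stimulus_history[-2:] == ["toxic", "toxic"]:
        return "move away"                      # repeated harm: relocate
    return "close up"                           # single toxic puff: protect and wait

puffs = []
for puff in ["clean", "clean", "toxic", "toxic"]:
    puffs.append(puff)
    print(puff, "->", respond(puffs))
# clean -> close up, clean -> stay open, toxic -> close up, toxic -> move away
```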
Biocomputing has similar findings.
I think we are ripe for a new definition of what "soul" and "consciousness" mean, because--at least to me--they have lost their meaning under the old definitions.
ARTURO,
What is so bad about Descartes' mind-body dualism from a modern perspective?
Has anyone asked whether we actually "want" machines to have consciousness? Do we really want a machine to be aware of what it is designed to do? I can imagine a whole series of machines that would instantly require counseling and coaching services to keep them from falling into a deep depression once conscious of their purpose and role in the world. We would need a mechanism to ensure that the consciousness does not interfere with the machine's core function, which is probably best achieved by the ability to turn the consciousness off, which in turn raises the question of why we made it conscious to begin with.
Achim,
Your comment reminded me of an incident that happened during a seminar I attended:
During the seminar someone said that the mission statement of the AI group at MIT was to build a machine that it would be immoral to turn off.
Someone suggested that the mission statement should be re-written stating that their aim should be - 'to build a machine that it would be immoral to turn ON!'.
Christopher, good one, but wouldn't the right mission be "NOT to build a machine that would be immoral to turn ON"?
Constantine,
I believe that it brings some contradictions; see, for example, Eric Dietrich's very interesting answer to my question at:
https://www.researchgate.net/post/How_does_dualism_counter_the_argument_of_violation_of_conservation_of_energy_in_the_philosophy_of_mind
I still feel that, besides some argumentative contradictions, many people still hold Cartesian dualism as part of their core beliefs, and while I do not share that philosophical stance, I will respect their beliefs. That is why, if I feel that some points touch core beliefs, I will try not to press them further unless asked.
Hope this helps
Achim,
I don't have a particular moral stance on this one because I don't think such a machine is possible - however, it does raise some very interesting issues and ideas.
For example, could we take a page out of one of Asimov's books and formulate the three laws of Ethical Artificial Consciousness Construction?
I propose the first to be:-
Any constructed Artificial Consciousness must be made in such a way so as to feel happy.
Constantine,
The flute example is not determinism. It is not "just" cause and effect. The flute actually "made a decision": as long as you blew fresh water in, it "recognized it as harmless" and hence "decided" not to move. When the substance blown in was toxic, it "made a decision to change", but only after the stimulus repeated and it "understood" that the danger might be permanent or recur--it in fact carried out a statistically complex calculation without a nervous system (as far as we understand).
But look at this from another angle. Assume the flute, a single-cell organism, made all these changes deterministically. Then by deduction (and sorry, but I am a mathematician and logical deduction is my favorite subject) the following is true:
We all know that we are all--including you--made of single cells (like the flute). If the action of one flute is deterministic, then by deduction the action of a group of flutes would also be deterministic, and hence, by deduction, an animal made up of single cells like the flute is entirely deterministic. The animal called "human" is made up of single cells that form larger organs but are still basically single cells (a neuron in your brain may run from the back of your head to the tip of your toes, yet it is just one cell). Are we then nothing more than a bunch of deterministic cells?
I think this calls for a bit more brainstorming, since determinism is obviously untrue in this case.
Angela: I agree with your conclusion that the definitions need to be reviewed as we seek to apply them in a new context. Not to mention that we are seeking to apply concepts to a new context when, as far as I am concerned, we are far from having grasped them fully in their original context.
I could not agree with you more, Achim! And I find it awesome that here we are, a major multidisciplinary group, trying to grasp the basis of something we always took for granted, which has now been turned upside-down so that we must start from step 1. I find it fascinating and am glad to be part of this awesome conversation. ;)
I just feel sorry for Mahdi, since we have certainly taken this conversation to a point where the answer to his question becomes more elusive with every exchange :) Sowwy :)
Dear Achim,
Don't feel sorry!
I'm really glad to see these interesting and exciting comments. I am even more glad that my question created this group.
Actually, I think the purpose of these questions, and of ResearchGate, is exactly what we see here.
These comments show that it may be too soon to answer my question: we have a conceptual problem, it is not obvious which areas or regions of the brain create consciousness, and we do not know what consciousness is.
Consciousness in humans is not yet understood, and we already want to create consciousness in machines! What ambitious scientists we are!!
There are still many questions without answers, but I don't want to stop this group. Just read my comment and ignore it!
Go Ahead ambitious scientists,
Best Regards,
Mahdi
Christopher,
"Any constructed Artificial Consciousness must be made in such a way so as to feel happy."
What a crazy idea!
But we do not know what makes a machine happy. It might be fatal for us.
Wilfied.
Good point!
I guess the first law should be refined a little.
This discussion brings to mind an incident at a seminar that I once attended at Sussex. During the seminar it was mentioned that someone had said that the goal of the artificial intelligence/consciousness project at MIT was to create a machine that it would be unethical to turn off. I retorted that it might be equally argued that the goal of the project at MIT was to create a machine that it would be unethical to turn on!
To put it simply:
Can a machine laugh out of the blue? No! So why argue?