No, human consciousness is an accumulation of experience and knowledge that is stored in the mind and evolves over years. Constructing a neural network is not enough; it would have to be programmed with all the information that an adult mind has accumulated over decades. Even then, I doubt the computer would have enough sense to realize it is a computer, or that it is cognizant of itself.
I also think it is not possible. It is good to remember that the accumulation of experiences and knowledge needs to happen according to the values of a society.
Vilemar Magalhães
The answer is NO, because humans don't have the power of the wisest creator (God/Allah) to translate each human neuron accurately into a computer system and so produce human consciousness. Even if it is tried, the resulting consciousness will lack the human element.
Nope!!!! This takes us back to what "brain dead" means...
OK, neurons need to fire for responses, and the firing is triggered by a stimulus. Is the stimulus predetermined, and is the reaction too? If not, how is a response going to arise?
Who has taught the response? Some babies cry when they fall, some walk about as though nothing happened, some cry when they get attention...
For psychological responses, if they are genetic, how early will they kick in?
So even a machine needs to learn and store. We have remarkably good memory (in every cell, I believe); machines need huge storage, or some clever encoding.
Second, the variants: how do we train machines?
Fundamentally, machines do what we ask them to do; with AI there is a greater level of automation, with jitters at times.
So people can claim we can, but I doubt it, given the exceptional precision with which nature has endowed us (all of nature).
It would have to be connected to sensors (nerve endings) to receive data. (This is hot!)
It would have to be connected to muscle fibres to act on information. (Move that hand away quickly!).
A baby only becomes conscious of itself when it sees its feet wiggling in the air. It is only in childhood that the person becomes conscious of how the person begins to fit into the world.
Aparna, You don't know how hard it was to bring up my children and grandchildren! I have edited puberty->childhood. Thanks.
Not really Prof !! I have no kids and so I still feel my Mom holds the key :)
Hi everyone,
This will be partially my speculation and partially based on some neuroscience findings.
Neuron firings are just tools; the real deal is the tissue that is formed among neurons. The brain tries to minimize the amount of neuron firing, which is expensive, by forming tissue between neurons that are activated together. The "fire together, wire together" idea is just an outcome of this process. The brain is trying to make connections/tissues because a tissue, once created, can be used all the time, so it is cheaper in terms of resource efficiency. When we first "learn" things, the brain is a mess: lots of neurons fire here and there. If we continue doing that thing, some structures emerge and it gets easier and easier. Think of anything you are able to perform: walking, talking, playing piano, etc. Each of these was once so incredibly hard that we could barely perform one of them alone. Now we are able to walk and talk and observe our surroundings at the same time. This is due to the tissues.
So, if one can imitate the activities of all the neurons, it means nothing. If we can also copy the tissues among those neurons, we can really talk about duplicating human "mind". After all, seeing an "orange" essentially means a certain activation pattern in the brain and the activation is based on the neural tissue which connects certain regions and neurons together. It is more or less like callus formation on the skin. That is just a natural response which bridges the environment and the individual by making the interaction stronger, easier and more efficient.
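The "fire together, wire together" idea above can be illustrated with a minimal toy Hebbian rule (a didactic sketch only; the unit count, learning rate, and pattern are arbitrary choices, and real neural tissue is of course nothing this simple):

```python
import numpy as np

# Toy Hebbian rule ("fire together, wire together"): connections between
# co-active units strengthen, so a practiced pattern becomes cheap to reactivate.
n = 8
w = np.zeros((n, n))                                 # connection strengths ("tissue")
pattern = np.array([1, 0, 1, 1, 0, 1, 0, 1], float)  # a repeatedly practiced activity

lr = 0.1
for _ in range(50):                                  # practice the same pattern
    w += lr * np.outer(pattern, pattern)             # Hebbian weight update
np.fill_diagonal(w, 0.0)                             # no self-connections

# After practice, even a degraded cue reactivates the full pattern.
cue = pattern.copy()
cue[:2] = 0.0                                        # lose part of the input
recalled = (w @ cue > 0).astype(float)               # equals the original pattern
```

Here the network "gets easier" in exactly the sense described above: once the connections exist, a partial stimulus suffices to recover the whole learned activation pattern.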
Wait... is it really true that a biological neuron has been accurately simulated in a computer? I don't think so. Neurons are really complex cells with tons of molecules; even calculating the exact Van der Waals interactions would be impossible today. The current simulation of neurons is a simple abstraction of all that complexity, assuming a neuron simply has a number of simplified electronic inputs (0 or 1) and firing outputs at the same level (0 or 1 again). Complexity is then assumed to be captured by combining a lot of these abstract elements (artificial neurons behaving like this) into a computational network composed of these mathematical abstractions. We can't expect, then, that consciousness can arise from that... even accepting that we know exactly what consciousness really means. This is still a very debatable topic, e.g. the hard problem of consciousness. However, another interesting (maybe more so) question is whether the simulation of biological neurons is the only way to achieve consciousness. Best regards.
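For concreteness, the abstraction described above, a unit that sums weighted binary inputs and either fires or doesn't, can be written in a few lines (a McCulloch-Pitts-style sketch; the weights and threshold are arbitrary illustrative values):

```python
def artificial_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: weighted sum of binary inputs,
    binary output. This is the whole abstraction -- no membranes,
    no neurotransmitters, no molecular dynamics."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these weights and threshold the unit acts as an AND gate:
assert artificial_neuron([1, 1], [1.0, 1.0], 2.0) == 1
assert artificial_neuron([1, 0], [1.0, 1.0], 2.0) == 0
```

The contrast with a real neuron, with its thousands of ion channels, dendritic geometry, and molecular machinery, is exactly the point being made above.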
With far less modeling and simulation detail, the robot Sophia shows a significant amount of human behavior and artificial consciousness. Therefore, my answer is yes, but I guess that complete modeling of the human neural system is nearly impossible due to its endless mix of normal and abnormal reactions.
I don't think that accurate simulation of every neuron in a human in a computer would result in human consciousness. This is because the function of the brain is determined not only by neurons but also by cooperation with other organs.
Definitely no, because what makes something 'human' is not just the brain nor its composite 'neuron;' neither is it the factor merely of 'consciousness.' The essence of the human is beyond capture.
This is essentially an ontological problem: even if every neuron were simulated by a computer, that would still be simulated human consciousness, not human consciousness proper.
By definition, it would be machine consciousness, rather than human. It could replicate human consciousness very closely, but if it's a "not-human-being" type of machine, it's still not a human. Maybe you think this is a cop-out.
I'm saying here that even humans are machines, in some ultimate definition of the term. But at the very least, we can say that a human "machine" is different from a primate "machine," or different from a plant, and certainly different from one of today's computers.
Data, in Star Trek: The Next Generation, was much like what you describe, Kirk. And yet, it was not human, and its consciousness was not "human" either. Much the same with the holographic doctor in Star Trek: Voyager.
No.
Humans can behave differently in the same situation. For example, if a person finds something valuable on the road, he may hand it in to the authorities or he may not; and the next time he finds something valuable on the road, he may do the opposite of what he did before. So it is always open.
I would rather believe it's not the case: emulation of neurons and neural networks (synapses) doesn't mean emulation of human behavior and consciousness.
This curious question calls to mind the plots and premises of science fiction stories and invites reflection on multiple levels. My brief answer is that the complexities of electronically emulating biological processes (including emotions), social awareness (including awareness of self and other), ecological awareness (awareness of self in environment), and the chronosystem (not just history and/or predictive analytics) are such that I doubt whether mere mortals have the capacity to achieve this in a way that would also permit the emergence of 7 billion unique AI individuals.
The simulated neural network can’t replicate human consciousness for several reasons, the main ones being as follows:
1. We do not yet have an adequate model of a neuron; the resulting mathematical model of a network of such synthetic neurons will be unrealistic.
2. The computational complexity of the problem surpasses the capacity of the current and anticipated computers.
3. It is quite possible that the efficiency of calculations will not be sufficient to simulate consciousness in real time. To simulate one second of consciousness we might need an hour of computing time. The discrepancy of time scales will make a synthetic consciousness irrelevant, or at least pose the question of how to transform between two different coordinate frames. I suspect the form of this transformation would be rather complicated.
4. A neural network simulated on a digital computer is not as energetically efficient as actual neurons. Even with a perfect mathematical model of a single neuron, the excess heat release will interfere with the state of the network, making the synthetic consciousness different from the human one.
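As a rough illustration of points 2 and 3, here is a back-of-envelope estimate (the synapse count is a round literature figure, and the update rate, per-update cost, and machine speed are pure assumptions, not measurements):

```python
# Order-of-magnitude sketch; every number here is an assumption.
synapses = 1.0e14         # ~100 trillion synapses (rough literature figure)
updates_per_sec = 1e3     # assumed 1 kHz update rate per synapse
flops_per_update = 1e4    # assumed cost of one biophysically detailed update

required = synapses * updates_per_sec * flops_per_update  # FLOP/s for real time
machine = 1e18            # an exaflop-class supercomputer

slowdown = required / machine
print(f"slowdown vs. an exaflop machine: {slowdown:.0f}x")  # prints 1000x
```

Under these assumed numbers, one second of simulated activity would take on the order of a quarter of an hour of machine time; increasing the per-update cost toward a truly detailed neuron model quickly reaches the one-hour-per-second discrepancy described in point 3.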
So many of these answers seem to take an interpretation of the question that allows trivial negative answers. Let’s be a little more generous in the interpretation of the question and see if we can have some reasons and arguments instead. I will take the question to mean “Assume we could simulate the neuronal network, and the state of that network, of a particular human being at a specific moment in their life. Would that network exhibit consciousness?” This is rather like asking “if a person loses all sensory inputs, would they be conscious?” but focusing only on what happens in their neurones.
If consciousness arises only from the interactions between neurones, then the answer must be ‘yes’. If the answer is ‘no’, then there must be other factors involved in determining consciousness.
Note I am ruling out answers based on ‘we can’t (yet) create such a simulation’, ‘it wouldn’t be efficient’, etc. That is covered by the ‘if’ part of the question. In fact I am ruling out questions about how exactly the simulation is achieved. Personally, I also rule out arguments of the form ‘it wouldn’t be human consciousness by definition’ as just being attempts to avoid any discussion of just what is meant by consciousness whilst preserving some notion that human beings are in some way unique in principle.
So, if the answer is ‘no’, for what reason? The best reason I have seen so far in this discussion is that consciousness involves more of the body than just the neurones, although I would like to see an argument as to why I should think this, e.g. an experiment suggesting that the self-awareness of the conscious mind involves some organ other than the brain.
No, because consciousness depends on hydro-ionic waves that cross living tissue. An exclusively neuronal simulation cannot reproduce these waves, because neurons are separated by the synaptic cleft and electric synapses are not sufficient to carry them; glial cells and the extracellular matrix are necessary; see:
Article Astroglial hydro-ionic waves guided by the extracellular mat...
No. Even if we could borrow a perfect mathematical model of a neuron from the aliens, we would still need to simulate memories and self-propagating stimuli perfectly to achieve such a level.
Currently there are around 7.2 billion human consciousnesses on planet Earth. Which one would we compare it to?
The answer from an enactive perspective is no, because of the neural considerations sketched by Alfredo Pereira and others, and because consciousness is seen as an emergent phenomenon, arising from the interaction of living beings (individually and collectively, each organism carrying its evolutionary heritage in its cells) with their environment.
There is far more involved here than a neural network: we are looking at complex organisms interacting with even more complex environments, all in a continuous sequence of changing states, spiraling through processes of germination, development, reproduction, and decay.
Artificial intelligence (with emphasis on 'artificial') is a fascinating field (of which I know little), but it is surely something very different from the consciousness experienced by human and nonhuman animals.
The die-hard 20th-century metaphor of brain as computer and mind as software, reflecting centuries of mind-body dualism (and even older soul-body dualism), seems to be at odds with developments in cognitive science and related areas of inquiry over the last quarter of a century.
I offer the following as a possible response to Will Harwood's invitation. It seems to me that a key question is "What is 'self-knowledge' and how is it gained?" Our answer to this question is shaped by our assumptions about human nature. Many today accept the premise that human beings are biopsychosocial beings. If we accept this premise, then knowledge of self is gained through observation of self and others, by experiencing emotions, and by being in relationships. In other words, self-knowledge is holistic in character and is not limited to the functioning of one biological organ.
Perhaps a story will serve to illustrate what I mean? Some years ago one of my children travelled along with some friends from a city in Canada to a major US city to attend a music concert. Everyone who was at the concert knew the music of the band, and everyone knew they were in a city with a history of violent crime. My child and their friends were walking along the stadium concourse when all of a sudden everyone heard a rat-a-tat-tat-tat-tat-tat sound. The Canadian youth kept on walking. A group of American youth who were also in the stadium concourse fell to the ground and covered their heads. Both groups heard the same stimulus and each reacted differently: one with a fear of being shot, the other with excitement to get into the concert. More important than what may be dismissed as the "conditioned responses" of both groups is what the Canadians took from this experience. Specifically, this experience alerted them to the fear the other teens lived with on a daily basis. In addition, this factual assessment was attended by a sense of awe for the American youths' concern for each other, an awareness of everyone's vulnerability in the moment, and a sense of relief that the sound had come from a drum set and not a weapon. Put simply, it was the process of observing the behaviors of others and comparing them to their own that provided the Canadian youth a new level of self-knowledge.
Let us return now to the question at hand. Computers may be programmed to logically assess situations and may one day be able to recognize human emotions for what they are. The question, however, is to what extent it is possible for a computer to gain self-knowledge based on its experiences with humans and then to adjust its behaviors (programming) as a result of this new knowledge.
Can Computers Become Conscious, an Essential Condition for the Singularity?
By: Logan, Robert K.
INFORMATION, Volume 8, Issue 4, Article 161
DOI: 10.3390/info8040161
Publication date: DEC 2017
Document type: Article
Abstract
Given that consciousness is an essential ingredient for achieving Singularity, the notion that an Artificial General Intelligence device can exceed the intelligence of a human, namely, the question of whether a computer can achieve consciousness, is explored. Given that consciousness is being aware of one's perceptions and/or of one's thoughts, it is claimed that computers cannot experience consciousness. Given that it has no sensorium, it cannot have perceptions. In terms of being aware of its thoughts it is argued that being aware of one's thoughts is basically listening to one's own internal speech. A computer has no emotions, and hence, no desire to communicate, and without the ability, and/or desire to communicate, it has no internal voice to listen to and hence cannot be aware of its thoughts. In fact, it has no thoughts, because it has no sense of self and thinking is about preserving one's self. Emotions have a positive effect on the reasoning powers of humans, and therefore, the computer's lack of emotions is another reason for why computers could never achieve the level of intelligence that a human can, at least, at the current level of the development of computer technology.
No. Consciousness is emergent and cannot be reduced to its individual parts.
Absolutely yes. Many of the respondents don’t seem to appreciate that the question is a hypothetical conditional: if every neuron in a human were simulated in a computer. It presumes 1) that it is possible, which we don’t know, but we are asked to assume that it is possible, and 2) that it is a human; that is, an existing human with existing life experiences, with all of their propensities for action, behaviour, and thought. Consciousness would follow as long as there was continuing stimulation (here, I presume, simulating the full input nerve stimulation of a human, which implies a full simulation of an external world) plus the registration of intentions to act in the “world”, correlated with changes in the nature of the stimulation received from the world. Both of these further conditions are themselves not possible at the moment, but are they not possible in principle?
If every neuron were replicated exactly, and the other conditions were also met, then all inputs to the simulated system would result in the same states as in the “real” system, if that system were exposed to the same stimuli. Moreover, this would include the learning of new things, even the learning of fundamentally new things, because learning, like all mental phenomena, is encoded in neurons and their interconnections.
In this particular hypothetical, the computer simulation would exhibit human consciousness. It wouldn’t necessarily be a human, that depends upon definitional choices - for example, if we define a human as having a biological body, then it wouldn’t be a human. But we can define things any way we want to: as long as the resultant definitional system is consistent.
There are no knockdown philosophical arguments against this position, just the bare logical possibility that this position is not true, which implies that the position is not logically necessary (see here the profusion of ”zombie” arguments). But since when are scientific questions decided in this way?
Instead of answering the question, I'll add more current debate.
In fact, thinking about this question is particularly interesting given recent news (https://www.technologyreview.com/s/610743/mit-severs-ties-to-company-promoting-fatal-brain-uploading/) regarding a company, Nectome, which is trying to (quoting) «preserve your brain well enough to keep all its memories intact: (...) If memories can truly be preserved by a sufficiently good brain banking technique, we believe that within the century it could become feasible to digitize your preserved brain and use that information to recreate your mind». They use a technique called «Vitrifixation, also known as [Aldehyde-Stabilized Cryopreservation], [which] has been demonstrated to preserve the connectomes [all the connections, called synapses, between neurons in ... brain] of animals, a promising first step towards demonstrating efficacy of connectome preservation in humans. Vitrifixation’s efficacy at preserving biomolecules will be studied in the future» (from https://nectome.com/).
Consciousness is not memory. We can be conscious while missing memories, for sure. But to have memory, the explicit kind, we need to be conscious. So, if consciousness involves more than neurons, wouldn't memory of past events require more than neuronal activity as well? If so, then they are missing a part of the picture. However, to know whether consciousness involves more than neurons, that premise needs to be tested. I suspect we still have some years ahead before we can give an answer. Artificial intelligence, as some have pointed out, would be a promising tool.
Dear Robert, Please read my answer above. There are scientific reasons against the hypothesis. Can philosophy contradict science?
Alfredo - Much as I am impressed with your work, I think it only right to draw out what is being said a little more clearly.
First, the criticism of neurone-level modelling is really against what we may call the ‘classical neurone model’ based on Hebbian neurones. Replacing the Hebbian model with neurones based on your own investigations would seem a viable proposition for a simulation and, presumably, would lead to a conscious state. So the response addresses the possible failures of a particular neuronal model.
Secondly, the notion of consciousness addressed in your work is very particular and involves ‘feeling’ as an essential part of consciousness. To quote from your paper: “The consideration of feeling as the mark of consciousness contrasts with the main tradition of modern philosophy and contemporary cognitive science.” Now, I accept that you may argue for the priority of the definition of consciousness that you use, but I think you need to be clear here, as you are in your paper, that it is not what might be called the ‘mainstream’ definition of consciousness that is being used and, indeed, is one among many possible definitions.
I think we need to consider the similar arguments that have been made about cloning. If we clone a human being, would it be the person we cloned? Well, in terms of physicality, yes, it would be an identical copy; but in terms of being the person, it absolutely would not.
Human consciousness is barely understood, so replicating it is impossible. We know that genetics plays some part in personality and that experience plays a major part in it, but we do not have any reliable way of measuring either. It is therefore impossible to determine any form of prognosis.
We do not have the technology, the understanding or the insight to obtain it at the moment.
This question is undoubtedly fascinating, but it falls into the same category as "is there a god?" and "is there a human soul?"
Well probably...on the other hand maybe not!
Considering the recent development of AI, I believe it is possible.
Consciousness seems to be an emergent property that occurs once living cells reach a rather advanced development. It took billions of years of evolution on our planet to achieve this. I am assuming that a computer simulation is not involving any living cells. As such, I do not think it would be possible for such a computer to truly have consciousness. It would only simulate or mimic consciousness, but not truly have any self-awareness or true subjective awareness. I am still convinced by philosopher John Searle's Chinese Room thought experiment.
Dear Will:
Many thanks for your attention!
Yes, if the strong AI builders use my model they can implement artificial consciousness! However, they would need other components besides neurons.
The centrality of "feeling" that I propose is non-orthodox in philosophy and the cognitive sciences, but not in the medical area. The main signal of consciousness during general anesthesia is feeling pain. We can also interpret Nagel's "what it is like to be" (and any kind of "first-person perspective") as feeling. The concept of 'qualia' is translatable to feeling, as when we say that we feel the taste or the smell of food. We do not say that we "feel colors", except in poetry (e.g. in African-American popular music, people say they "feel blue"), but it would make sense if we did. I address these semantic and philosophical issues in other papers and chapters.
I return to the question I asked above: which consciousness does this future artificial creature emulate? Will it experience bonding with its 'mother'? Will it express affection and anger? Will it empathise with other collections of artificial or naturally evolved neurons?
Will it have ambition and aspiration? Will it believe that there is a higher being? Will it be an atheist? Will it be indolent, arrogant and rude, will it develop a craving for intoxicants?
Human consciousness is not a collection of neurons and cells. We do not know what human consciousness is, even though we have experienced it as individuals for our whole lives (at least from the day we were born) and our species has experienced it for hundreds of thousands of years.
Who will teach this collection of wires and rare metals to behave itself and what should that behaviour be?
What kind of consciousness will manifest itself in the absence of morals?
Dear Kirk MacGregor
W/R to your original Question:
Perhaps not. But good general AI WILL have consciousness (it is a very important aspect of functionality) and, I guess, that would often include the kinds of consciousness some hypothetical human(s) could have. In many regards we may want AI to have consciousness that is qualitatively better than most humans' and, perhaps overall, better than any human's, at least for its many purposes.
No, for a thousand reasons. One of the oldest and simplest was provided by Leibniz in the Monadology, section 17:
''Moreover, we must confess that the perception, and what depends on it, is inexplicable in terms of mechanical reasons, that is, through shapes and motions. If we imagined that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter it, as one enters into a mill. Assuming that, when we inspect its interior, we will find only parts that push one another, and we will never find anything to explain a perception. And so, we should seek perceptions in the simple substance and not in the composite or in the machine. Furthermore, this is all that one can find in the simple substance—that is, perceptions and their changes. It is also in this alone that all the internal actions of simple substances can consist.''
To say that neurons can be perfectly simulated is to assume that a neuron is a kind of machine and that their collective being in a human brain is, as a consequence, a machine. Leibniz's argument denies the possibility of consciousness, exemplified by ''perception'', for any type of machine. Having denied the possibility of consciousness in machines, Leibniz recommended searching for it in simple substances, those that are not composite, not machine-like. I do not follow him there; I instead give up the mechanistic ideology. There is not a single reality that is machine-like, although all of natural science is necessarily machine-like. This is not a contradiction but simply the irreducible separation between language and reality. We can't even describe an individual electron's behavior (this is a simple substance) with a machine-like model; how is it we could provide a machine-like model even remotely close to a eukaryotic cell (3 billion years of evolution), or a neuron, or a brain, which is the most complex reality in the known universe? If we cannot even model the simplest, how can we pretend we are on the way to doing it for the most complex realities? These are not machines: the electron, the eukaryotic cells, and their organisms, which are gigantic colonies of eukaryotic cells, are not machines. The only machines that actually exist are those we built, and they are not like the living things that have naturally evolved in Nature.
I believe, no. If every neuron in a human was accurately simulated in a computer, it would not necessarily result in human consciousness, because consciousness refers to your mind and your thoughts.
Dear Mahfuz Judeh
I agree with you to the extent that "If every neuron in a human was accurately simulated in a computer, it would not necessarily result in human consciousness". And that is because experience and interaction are involved in making neuron connections and determining what their function is (it is not possible to see their adaptive functionality as inherent in the neurons, of course). An AI robot can otherwise have a significant mind and thoughts, as needed in its sub-areas of human simulation. Consciousness does NOT require being everything like us (as some say or clearly imply), the reason being that one does not bring forward everything one is in every situation; if things were made to depend on THAT, you would have a sick and poorly functioning system.
Human neural networks are not designed or constructed; they evolve. For that reason we can conclude that it is not possible to 'simulate' such an impossibly complex random pattern.
We really ought to stop this obsession with drawing analogies between human consciousness and the number-crunching of a computer. While calculations are a process of thought, thought is not a calculation.
We might just as well suggest that attaching wheels to our grandmothers would make them wagons.
It is not even possible to simulate complex protein folding, let alone a eukaryotic cell or a neuron. But let's remember that what we are conscious of is the state of our body (not the brain) and the state of our interaction with the reality of the world outside our skin, and none of that is part of the alleged simulation.
Dear Barry Turner and Louis Brassard, respectively
Dear B.T.: While it may be "not possible to 'simulate' such an impossibly complex random pattern", it seems clear to me we are not talking about "impossibly random patterns" and, given no arguments from you, we just have your characterization and assertion.
We are talking about an AI robot doing what we need it to do in a WAY like a human (and developing and "learning" like a human IN THE AREAS NEEDED, as well) -- does this sound "random" to anyone? Maybe it somehow makes you feel good to declare the situation and make a supposedly related proclamation. (YET: I DO admit that starting from the perspective of neurons seems certainly NOT THE WAY TO GO; I have made my proposal to General Artificial Intelligence people: see https://www.researchgate.net/project/Developing-a-Usable-Empirically-Based-Outline-of-Human-Behavior-for-FULL-Artificial-Intelligence-and-for-Psychology . That proposal has NO focus on neurons whatsoever -- how would anyone try to think from such a standpoint?)
Dear Mohamed EL-Shimy, Harry Barton Essel and Aleš Kralj,
The robot Sophia has been to Mexico; the whole impression of this kind of machine seems to me disappointing, and apt only for a wide, non-specialized audience of "pedestrian scientists".
It is not a problem of electro-mechanical simulation. Let's say that humans finally have the hardware construction abilities to simulate the whole complexity of the biological organs from which consciousness emerges; even more, that the human being has achieved biopoiesis, as R. Paniker supposes, and also the automatically self-replicating neural networks mentioned by the foreteller Minsky. The resulting machine's behaviour or performance will not have any real resemblance to human consciousness. Not even to the sick consciousness that we know from diseases such as schizophrenia, autism or paranoia. This "homme machine" (glimpsed by Julien Offray de La Mettrie) will be more similar to feral children such as Victor of Aveyron (1800) or Kaspar Hauser (1828), both of whom never developed language, or to the babies deprived of affection and language by Frederick II of Hohenstaufen, James IV of Scotland or the Mughal king Akbar Khan, i.e. suffering from the hospitalism syndrome of René Árpád Spitz.
Human consciousness evolves exclusively within a normal human environment. The Ameslan-signing primates, chimpanzees or gorillas (such as Koko), do not exhibit a real human consciousness, having naturally an animal self-identification, and their communicative advances are obtained exclusively in close, warm contact with humans, as happens too with cetaceans such as dolphins (nevertheless, do not forget the case of Tilikum (Orcinus orca), which killed its trainer).
Babies without affection and without language communication die within a few weeks (as the terrible experiment of Frederick II showed).
The Frankenstein proposed by AI is by no means possible.
At best, to reach the robotic abilities posed in Asimov's tales, this artificial intelligence would have to co-evolve with human intelligence for thousands, maybe millions, of years before a human-constructed device could adopt the slightest trace of consciousness. Much human suffering will arise from thinking about AI in soft terms and lightly.
Plainly, it is worth remembering that Homo sapiens sapiens erased the intelligence of Homo sapiens neanderthalensis from the face of the Earth.
Warm regards
Dear Brad,
You seem to be in the habit of calling other people's arguments "proclamations". It is convenient, since you do not even have to respond to the argument: you simply proclaim that it is not one. An easy debating trick. So instead of doing likewise, I will continue to provide arguments.
I did say that we are conscious of the state of our body. By that I mean things such as hunger, pressure on our joints, and so on. We are visually conscious of a lot of the surfaces in the world around us, and all of that cannot be simulated simply on the basis of the neural connections; the actual radiation striking the surface of the retina is necessary. I know it could easily be provided to the simulation, but the question ignores this. The alleged simulation necessarily needs to simulate the actual interaction of the body with the environment, given that our consciousness is in large part about this interaction. This was my argument, not simply a proclamation.
I do not find anything worth commenting on your second paragraph.
It depends on the nature of the simulation and on the nature of the hardware. A computer simulation of a hurricane won't get you wet. But a simulation of consciousness-causing neuronal states in a computer that uses biological chips* ("wetware") might be able to achieve something close enough to the biological substrate that causes actual consciousness.
* https://newatlas.com/cyborg-biologically-powered-chip/40815/
Also see my previous answers to related questions:
https://www.researchgate.net/post/Do_you_think_it_would_be_possible_in_the_future_for_artificial_intelligence_to_accurately_translate_languages_in_complex_sentences
https://www.researchgate.net/post/How_can_AI_have_emotional_intelligence
You also need sensors so that the neurons could be conscious of something. To have human subjectivity, you also need objects. Human subjectivity is also determined by the metastable state of the organism, called homeostasis, which requires continuous effort to sustain, together with a self-preservation instinct. All this is very different in a computer. Finally, are neurons conscious, or is it something else? That is in fact the crux of what you are asking.
@Harry Friedmann RE: you also need....
Sure. Computers need all sorts of things, but they can be equipped with sensors and made mobile, like our brains; just put the CPU in a robot or android and send it out into the world. Without that they'd probably be like a very young infant or a patient just waking up from surgery: conscious (awake and ready to process or receive) but not interestingly conscious. But they'd be humanlike and conscious, which is all the question asked about. The consciousness of a Gödel or a Gershwin would obviously require long-term and ongoing interaction with the environment (or at least with The Matrix 😉).
Presumably, if we can simulate human consciousness, we can build ourselves a computer God. Why stop at human consciousness? If we can do that, we can build an omniscient, omnipresent entity that can make us a new world in six days.
It does not have to be the Hebrew God; we could of course simulate the whole Egyptian, Greek and Roman pantheon and have them compete for ruling the universe.
And let's not forget the Devil! We can simulate him too, and then we can have a 'simulated' Armageddon.
Funny. I've read through the above answers and, quite frankly, I am appalled by the quality of arguments made by many contributors. A good portion of the answers run along the lines of "No, because you can't simulate a neuron", which of course is entirely beside the point. The question was "If every neuron in a human was accurately simulated in a computer...". Nobody stated that this would be possible, but refusing to follow even the tiniest bit of hypothetical thinking is, IMHO, disqualification from the scientific world.
And of course the question links to the Bieri trilemma. If you assume that there is an entirely physical basis ("neurons") of consciousness, then you need to make an assumption regarding the possible causal relationships between mental and physical processes. The same happens if you assume that the basis of consciousness is purely mental: how does (mental) consciousness interact with physical substances if you subscribe to the view that the world of physical substances is causally closed, i.e. that there cannot be mental causes for physical effects? Said trilemma was formulated 35 years ago, and I am at a loss to explain why many of the answers don't seem to know about it.
Some answers point out that you need sensors, and the rationale seems to be that without sensory input the consciousness would be "not interestingly conscious", or even not sustainable, as sensory input is a necessary ingredient of consciousness. I would say that this may be true, but it is highly speculative. What happens to a patient with locked-in syndrome, deprived of all sensory input? Does he lose consciousness immediately? Is it impossible that he would build on memories, ideas, language? Reflect on his sad state of being?
I don't see why this would be necessary.
I agree, Aleksandar,
that the distinction is necessary. But I still maintain
- that speculative thought is a vital ingredient of scientific progress. What one calls a "thought experiment" cannot and should not be discarded from the scientific tool chest;
- that many of the answers above are themselves exactly speculative, and expose many logical errors at that (petitio principii, circular reasoning, etc.);
- that ignorance of the basic conflicting assumptions in the theory of mind disqualifies someone from the discussion. That doesn't mean you need to know about the Bieri trilemma in particular, but I would expect the basics of, say, the Wikipedia entry on "theory of mind" from anybody venturing an answer. Knowing what has already been thought and established in the scientific community is as important as the right method.
Maybe, Aleksandar,
we can improve the quality of the discussion if we ask people to answer which of the three assumptions they hold true. Then at least you can start to argue.
Here's my invitation to all participants. Please answer which of the following statements you think are true:
1: Mental phenomena (e.g. consciousness) are non-physical phenomena
2: Mental phenomena have causal effects on physical phenomena
3: The domain of physical phenomena is causally closed, i.e. every physical phenomenon can be explained entirely by physical means
Please indicate your answer by 1:Y/N; 2:Y/N; 3:Y/N.
Please understand that 1:Y;2:Y;3:Y is self-contradictory.
Link to the original question:
Your answer should contain 1:Y if you think that a perfect simulation of, say, the human neurological apparatus, including sensory input, could not create consciousness.
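The consistency constraint among the three statements above can be sketched in a few lines of Python. This is only a toy illustration of the trilemma's logic; the names s1, s2, s3 are my own labels, not standard notation.

```python
from itertools import product

# s1: mental phenomena (e.g. consciousness) are non-physical
# s2: mental phenomena have causal effects on physical phenomena
# s3: the physical domain is causally closed
# Affirming all three at once is self-contradictory: a non-physical
# mental cause (s1 and s2) would violate causal closure (s3).

def consistent(s1: bool, s2: bool, s3: bool) -> bool:
    """True unless all three statements are affirmed together."""
    return not (s1 and s2 and s3)

for s1, s2, s3 in product([True, False], repeat=3):
    answers = "; ".join(f"{i}:{'Y' if v else 'N'}"
                        for i, v in enumerate((s1, s2, s3), start=1))
    print(answers, "-", "consistent" if consistent(s1, s2, s3) else "CONTRADICTION")
```

Of the eight possible answer patterns, only 1:Y; 2:Y; 3:Y is flagged; the other seven are formally consistent (which, of course, says nothing about their plausibility).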
If every atom in the universe was accurately simulated in a computer would it result in a new universe?
Dear Kirk,
Your question "If every neuron in a human was accurately simulated in a computer, would it result in human consciousness?" is not clear in either a biological or a philosophical sense.
1. The human, before any contact with the computer, may already have had consciousness.
2. What kinds of stimuli, how long, how intensively?
3. Do we have any data about the imagined situation?
4. Consciousness is more than the sum of uncertain stimuli.
5. I like this question.
András,
maybe this thought experiment might help: assume you could, with an ultrafast, super-high-resolution scanner (of whatever technology), map the entire brain: every neuron, dendrite and synapse, plus the current state of excitation. For good measure, you do this for the entire nervous system as well.
Now simulate the brain in a computer. Would there be thoughts like "Where am I? What happened? Why is it dark? Why don't I feel anything? How can I reconnect to the world?"
Yes or no?
Thank you Aleš.
May I ask you to please explain how, on the one hand, mental phenomena can have causal effects on physical phenomena, while at the same time every physical phenomenon can be explained by physical (= non-mental) means? This seems to be a contradiction.
If
MentalPhenomenonA -causes/effects-> PhysicalPhenomenonB
then
the explanation for PhysicalPhenomenonB must refer to MentalPhenomenonA, which contradicts 3:Y.
Dear Louis Brassard
You OFTEN simply conceptualize "things" without reference to, or ANY relationship to, good evidence, direct or indirect -- the latter meaning the citing of empirically verifiable experiences/processes that link to a directly observable overt phenomenon. I see no clear, overt, directly observable, well-specified, verifiable evidence referred to or indicated in any cogent way in many of your statements; such conduct is neither reasonable nor sensible, nor are its "products". (This is the very basis of unclear, poor, confusing, non-constructive "communication".) Thus, some version of this statement is all that is needed as a response -- and this is why I see persons who do as just described as "just asserting or proclaiming".
When I do not have clearly related evidence, the correct response is simply to note the need and perhaps outline a way to get such evidence (all based on things that can be hypothesized AND tested, and on things we already know). This is mainly all I do. (This is why I say very little specifically; but better that than the "alternative".)
If you cannot "walk this line", I suggest good disciplined study (no philosophers -- most of them do things wrongfully, basically, in notable parts, as described above).
While I agree in principle with Dirk that abstract thought, or 'thought experiments', are a useful tool in scientific endeavour, we need to ask whether they have boundaries.
Is any abstraction acceptable? Is any conceptualisation useful? How does endless navel-gazing contribute to scientific endeavour?
If we enter a boundary-less universe of what-ifs, will we eventually achieve actual manifestations of our own thoughts?
I would suggest no!
If every neuron in a fish's body could be accurately simulated in a computer, would it be able to swim and breathe underwater, or would it just 'think' it could?
If we day-dream endlessly about time travel, will we be able to go out for dinner with Julius Caesar?
There is, however, one good observation about consciousness we can make by engaging in this entertaining pastime: it is clear that we do not have the faintest idea what it is, and all the neuron simulations for all time are not going to get us any closer to knowing.
I am sorry, Aleš, I misread your quoting of my alternatives as answers. Now I get it. But I still do not understand your answers: if, as you say, mental phenomena are part of the physical world, how can it be that they have no causal effects there? You answered "2:N".
However, you can consistently claim 1:N, 2:Y, 3:Y with your above interpretation, which leads to a position that is monistic, reductionist and materialistic. Nothing bad about that, of course, but there are some implications: e.g. this world view is almost necessarily deterministic, and you cannot explain free will, for instance, other than as an illusion, a post-hoc explanation of predetermined processes.
Back to the point: Yes, of course this position forces a clear YES to the original question.
Mental phenomena inhabit a part of the physical world we do not understand. Clearly they are a function of electrochemical activity in the brain, and no one has ever demonstrated non-corporeal thought; but nevertheless thoughts, aspirations, dreams and emotions are far more than neurons firing in sequence in response to neurotransmitters.
The fact that we are able to think abstractly demonstrates this. It is possible for a human to conceive of impossible worlds and to postulate theories as yet untested. Humans can reflect not only on their own lives but on the past lives of others. Humans can imagine lives to come, and even the end of the universe. The question that we are all responding to clearly shows that thoughts are far more than reactions to stimuli.
Returning to the question: can we simulate a human in a computer? We already do. Computer games are now spectacularly realistic; the graphics are only a few refinements away from being indistinguishable from a real person. We can have rudimentary conversations with computers, and they can do a huge range of tasks for us. They already simulate humans.
I recommend that all read about Baudrillard's simulacrum, when the simulated becomes the real. It gives a real insight into how our perception deceives us.
Roger Penrose expressed some interesting ideas about the eukaryotic cell.
Simulation of brain using the computer model of a neural network is one of the most computationally intensive problems of our time. In the recent paper: “The quiet revolution of numerical weather prediction” by Bauer et al. (Nature volume 525, pages 47–55, 03 September 2015) we can read:
“As a computational problem, global weather prediction is comparable to the simulation of the human brain and of the evolution of the early Universe, and it is performed every day at major operational centres across the world”
Unfortunately, the simulation of the brain is much more complex, and it is still in a primordial state. Modelling a single second of the brain's conscious time can take about an hour of computing time, even on a supercomputer (and this is only for a small part of the brain).
Speaking in terms of computational complexity in meteorology, brain modelling is at the level of simulating a single cloud: very far from the weather, and unattainably far from the climate.
Fortunately, until technology offers us a quantum computer powerful enough to simulate the brain, we can still speculate about the mathematical aspects of this problem. For details please see the recent discussion:
https://www.researchgate.net/post/Can_we_mathematically_model_consciousness
The most popular answer in the above thread provides some additional reasons why it is unlikely that we can simulate the brain. Exploring the work of Turing, Gödel and Russell should indeed provide some additional insights.
Basically, Janusz, you answer the question "What if we could do XYZ?" by saying "we can't do XYZ", i.e. by not giving an answer.
This question is not about the technical know-how required to simulate a human neural network; it presumes that: "if every neuron in a human was accurately simulated..."
What is being asked is "would it result in human consciousness?"
At the very best it would result in a machine capable of developing human consciousness.
The future geek with his human-mimicking computer would then have to expose it to experience. Our personalities (the outward manifestation of consciousness) begin to be formed in the first few weeks of life in response to stimuli; affection from other human beings is crucial to this.
The human simulacrum would then need friendship as its consciousness expanded. Humans are gregarious animals; we evolved that way over millions of years.
Then this artificial intelligence would need to develop interests. These are a curious mix of pre-existing influences and interests adopted from other humans, usually parents and friends. Our simulacrum would need the constant company of others to be able to do this.
Human consciousness is subjected to constant satisfaction and frustration, which also shapes our decision making.
Being able to express affection as well as receive it is also a major factor in human personality and consciousness. The simulated neurons would need to be able to do that.
So! It is not about whether we could construct this object at all. The mechanics of the thing are just bits of electronics, we probably will be able to do that in the not too distant future.
Human consciousness, however, is not an object or a product of technology. This is not a new conundrum, of course; Descartes dreamt this one up before computers were even imagined. Maybe we ought to build a simulation of the pineal gland and see if that works.
Dear Dirk,
Thanks for your explication. The trouble is that the thinking hypothesis you described should be experimentally tested. A long time ago I read a utopian sci-fi novel about a human-like society whose individuals were nothing but enormous brains, without any body parts, placed in a well-preserved cave system. Their life was only the artificial stimulation of their brains by robots.
I would like to know whether the original question is a technical (scientific) one or a moral one. Is our mind or soul the sum of "appropriate" stimuli?
Dear András,
you wrote that "the thinking hypothesis you described should be experimentally tested". But how do you test that anyone or anything has "human consciousness"? How do you test that the human beings around you have consciousness? Does it go beyond anything that you could test about the computer simulation (assuming the computer simulation would provide you with "behavioral data", as observing humans would)?
It seems to me that from a third-person standpoint, there is no criterion that an appropriate simulation could not satisfy; indeed, a perfect simulation of every neuron would not be necessary at all - a simulation of observable effects would suffice.
However, from a first-person standpoint, what kind of data could you even admit for testing for consciousness? Probably: none, because you cannot take the first-person standpoint of the simulation (or anyone else but you, for that matter).
Dirk, thank you for your concise opinion. According to the study reported by D. Song (2008): “Non-computability of consciousness”
https://arxiv.org/pdf/0705.1617.pdf
many aspects of consciousness are not computable in the Turing sense. The direct answer to the original question is thus negative.
The advances in computing will contribute to research in the area of Artificial Intelligence, but it is unlikely that future technology can change conclusions based on fundamental mathematical reasoning.
Yes... if we could exactly simulate a human brain. But producing an exact human brain is a very hard nut to crack...
Aleš, thank you for your excellent comment and the link to the highly relevant book. The change of paradigm in computing, from the Turing automaton to the evolutionary computer, should be explored further in other areas of computing. In the meantime, we can supplement the original question by specifying the computing device:
“If every neuron in a human was accurately simulated in an evolutionary computer, would it result in human consciousness?”
With this formulation my answer will perhaps shift to Yes...
Thank you, Janusz, for providing the link. This certainly adds substance to the discussion. However, if I understand the paper correctly, it doesn't argue that all conscious phenomena are non-computable, but rather that some are.
From where I sit, this doesn't help much in the light of the original question. Don't you agree?
"Neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
"Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality."[7]" (link)
https://en.wikipedia.org/wiki/Mind_uploading
"would it result in human consciousness?"
This includes self awareness which includes an ability to determine hierarchies, sense of self worth, empathy and value.
Maslow's hierarchy of needs will not apply to the simulacrum, but it may make the mistake of thinking that it does. Consciousness involves the misinterpretation of input data and stimuli. What do collections of neurons say when they make mistakes? "Well, I am only human."
Ah ne'er so dire a Thirst of Glory boast,
Nor in the Critick let the Man be lost!
Good-Nature and Good-Sense must ever join;
To err is Humane; to Forgive, Divine.
Alexander Pope, An Essay on Criticism, Part II , 1711
Thank you, Talib, for the link. The important part is, of course, the "we believe".
I might add that people with different beliefs are rather unlikely to become (or keep being) neuroscientists.
My answer is a cautious "Yes"; the correctness of this answer depends upon two words of the question: "every" and "accurately". Currently, as far as I know, not every type of neuron has been discovered, nor have their functions been accurately deciphered. It is in principle possible that some kind of currently unknown, specific "grandmother"-type neuron might create consciousness (e.g. conscious visual experience). It is also possible that software alone cannot do the job; perhaps we would need special hardware to ensure the necessary highly parallel execution.
Dirk, after checking the proof I agree with your opinion. I agree also that the proof of non-computability is not an ultimate indication that a synthetic form of consciousness can't be simulated.
In order to advance a little further, let us consider the wiring diagram of a brain. According to the review of the problem presented in the recent MIT Technology Review:
"The human brain contains some 10^{10} neurons linked by 10^{14} synaptic connections. Mapping the way these link together is a tricky business, not least because the structure of the network depends on the resolution at which it is examined."
The text is available at
https://www.technologyreview.com/s/602234/how-the-mathematics-of-algebraic-topology-is-revolutionizing-brain-science/
It is evident that in order to create even a conceptual map of connections in such a system we have to use a mathematical theory.
The Technology Review also contains a link to the paper available at arXiv:
https://arxiv.org/abs/1608.03520
where the subject is described in full detail with many really nice graphs. Perhaps by analysing the formal structures discussed in the above paper we can prove the existence of a synthetic form of consciousness.
Hi Janusz,
let's do some simple math, OK? If I take your figures, then you can encode the wiring of the brain in a huge table, one row per synapse, with two columns: the codes for the sending and receiving neurons. These codes are pretty much 4 bytes each, so each row would be 8 bytes. There would be 10^14 rows. One terabyte is 10^12 bytes, so we end up with 800 TB to encode the wiring. This is certainly too little, as we'd also need to know the types of synapses and so on, but a petabyte is the order of magnitude we're talking about (give or take). Which is a fraction of the data Google handles per day.
It certainly is a lot, and will continue to be intractable for a decade or two if you want to process it at the speed of the brain, but it doesn't seem to be intractable in principle, does it?
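The arithmetic above can be checked in a few lines of Python. This is only a back-of-envelope sketch; the byte counts are assumptions taken from the post (note that a 4-byte code addresses only about 4.3 x 10^9 neurons, slightly short of 10^10, so strictly 5 bytes would be needed per ID, though this doesn't change the order of magnitude):

```python
# Back-of-envelope check of the wiring-table storage estimate.
# Figures from the MIT Technology Review quote and the post above.
NEURONS = 10**10            # neurons in the human brain
SYNAPSES = 10**14           # synaptic connections, one table row each
BYTES_PER_ID = 4            # assumed neuron code; 2**32 ~ 4.3e9 < 1e10,
                            # so strictly 5 bytes would be required
BYTES_PER_ROW = 2 * BYTES_PER_ID    # (sending neuron, receiving neuron)

total_bytes = SYNAPSES * BYTES_PER_ROW
terabytes = total_bytes / 10**12
print(f"{terabytes:.0f} TB")   # prints "800 TB" for the bare connectivity table
```

With 5-byte IDs the table grows to 1000 TB, i.e. exactly one petabyte, which matches the "petabyte, give or take" conclusion.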
Having said that, and assuming you're OK with a fast-forward to 2038: do we expect consciousness because consciousness is something that is "just a property, an abstraction" (as Aleš has put it)? Or because consciousness "emerges" automagically when you have such a complex system?
Or is something else necessary? Something transcending the material base? Or at least introducing a random effect (quantum-whatever)?
That's what the question boils down to.
I think we have to distinguish between human consciousness (which will most probably only emerge from a certain environment, and might be destroyed by copying the brain into a different environment) and other types of consciousness that might be adapted to this (most probably different) environment.
Also, I think there is a difference between copying just the neurons (which might not work at all) and copying the entire brain (or even more parts of the body).
Hi Daniel, above I described a thought experiment: if you were to scan (ultrafast, high-res) the neuronal structure of, say, my brain, right now, and re-create it digitally, what would happen? In this experiment there wouldn't be any need for consciousness to "develop" through interaction with an environment. If you subscribe to the view that consciousness is purely the result of physical processes (or of a particular abstracting view of them), then the question boils down to: "Would this simulation exhibit consciousness (and a memory of sitting in front of the keyboard, typing)?"
Hi Dirk, I do see that this would create a human consciousness for some time. My point is that it would most probably degenerate (or evolve) into another type of consciousness over time, as it adapts to the new environment.