Computers were originally designed around a brain model, and at one time were called electronic brains. Since then there has been some discussion as to whether they resemble brains or not. Today we have more information about how the brain works than ever before: does it show that the brain is like a computer or does the computer model fail in some way?
It is my belief that the computer model fails at the elementary memory level, where it assumes that discrete place-coded memory is the basis of the memory system of the brain. As far as I know, Fodor's objection to this in his book "The Mind Doesn't Work That Way" (2000) still stands today. As far as we can tell, even with the best neural networks designed to simulate the working of the brain, there is no way to implement discrete place-coded memory in a neural network.
Graeme,
could you please explain what you mean by "place-coded memory"?
Regards,
Joachim
By discrete place-coded memory I mean memory where every individual storage element has its own unique address. For instance, structures consist of substructures made up of even finer substructures, until finally every element has its own address. In neural networks there is no unique address for the finer substructures past a certain point.
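To make the definition concrete, here is a minimal sketch in Python (the structures and names are purely illustrative, not anyone's model of the brain):

```python
# Toy illustration only: in place-coded memory, every element sits at
# its own unique address, recursively down to the finest substructure.
record = {
    "episode42": {                  # address of a structure
        "who": {"name": "Alice"},   # every sub-element has its own path
        "where": {"city": "Kyoto"},
    }
}
print(record["episode42"]["who"]["name"])   # unique address retrieves "Alice"

# In a trained neural network, by contrast, an item of memory is a
# pattern spread across many weights; past a certain point no address
# names any one stored item.
import random
weights = [random.gauss(0.0, 1.0) for _ in range(16)]
# weights[3] is not "part of Alice": every stored item is smeared
# across all sixteen numbers, so no finer address exists.
```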
Yes, but it can be simulated by the brain. If the brain needs addressable memory, the effect can be produced by treating the address as a part of the pattern. Take, for example, the method of loci used by the ancient orators: a sequence of places was learned that functioned as addresses, and the places were associated with whatever they wished to remember.
Regards,
Joachim
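A minimal sketch of the loci idea in code, assuming nothing more than an associative lookup (the places and items are invented for illustration):

```python
# Method of loci as address-as-part-of-the-pattern: an associative
# store where the "address" is itself a learned pattern (a place).
loci = ["front door", "hallway", "staircase", "kitchen"]   # learned sequence
memory = {}  # association from place-pattern to content

def memorize(place, item):
    memory[place] = item           # the place acts as an address

def recall(place):
    return memory.get(place)       # cue with the pattern, get the content

for place, item in zip(loci, ["greet the senate", "praise the consul",
                              "list the grievances", "close with a plea"]):
    memorize(place, item)

# Walking the loci in order reads the "speech" back out:
for place in loci:
    print(place, "->", recall(place))
```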
Never say never! I think it is possible to simulate discrete place-coded memory. I know that current computer neural network models don't do so, but who says that cannot change?
Joachim, place-coded means that each location of the data has a unique address.
While there is the potential for some addressing, there is no evidence that the brain uses place-code addressing at anything below the neural group level, which means that there are lots of elements in storage that are not addressed. Even at the neural group level there is evidence suggesting that the address coding is degenerate, and therefore redundant or at least non-unique.
The neural group you indicated in your drawing was context-accessible, not addressed per se. True, you can include an address in the context, but that does not change the nature of the access.
No, because so far only about 30% of the brain's processes have been revealed, so on that basis alone the comparison fails. A similar scenario holds in AI as well.
Thank you.
Graeme,
I agree. Connectionists sometimes avoid the term neurons; rather, they use the term 'units'. A unit might be one neuron or a big group of them, since single neurons may die and the functioning of the remaining network must not be affected.
Here is an example showing that in some cases a single neuron can be addressed:
http://www.eurekalert.org/pub_releases/2010-10/ciot-cic102610.php
The next question is: if single neurons cannot be addressed, is it possible for a group of them? Last year a study was published showing that each neuron has its place in a 3D coordinate system (Van Wedeen et al., diffusion tensor imaging).
http://www.newscientist.com/article/dn21647-human-brain-organised-like-a-3d-new-york-city-grid.html
http://blogs.discovermagazine.com/notrocketscience/2012/03/29/the-brain-is-full-of-manhattan-like-grids
http://www.sciencemag.org/content/335/6076/1628
Regards,
Joachim
Yes, there is a 3D structure of fibers running through the brain, but the presence of the network does not imply addressing of signals at the individual neuron level, just the strategic placement of neurons in an epigenetic manner.
It's a matter of information versus access.
Neurons can receive information without it affecting how they are accessed, although it would seem that your context-accessible neurons have an implied address to include in their context.
It still doesn't offer discrete place-coded addressing at the individual element level. Multiple elements can be "hidden" within a single neuron.
The problem is that we tend to read too much into this type of thing, and miss the fact that the mechanism doesn't jibe with the computer model's discrete place-coded addressing.
Disregarding our cultural disposition to conceptualize the mind through analogy to whichever most advanced technology exists at the time, there are important philosophical as well as empirical issues that bring into question the adequacy of the computer as a metaphor for cognitive functioning.
Thinking about the brain like a computer led cognitive scientists to posit that it is modular, with distinct components operating independently (informationally encapsulated) on purely abstract, formally defined symbols. We know now, however, that the brain processes information in a parallel and highly dynamic fashion, with both feed-forward and feedback information flow, and on representations that are stored in both modality-specific and multimodal areas...
Philosophically, Searle and Harnad have made some important observations about the problems of equating the brain with a computer, or the mind with computation. Here is a transcript of a brief and informal talk given by Searle on the topic: http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html
Thank you Nikola.
We must be careful not to fall into the homunculus fallacy, even if the infinite-regress problem has been dealt with. But I had not realized that syntax requires an observer. An interesting talk.
I see the human brain as powered by the unlimited power of the soul, whereas the computer is powered by limited power and has limited capacity. I have read about a few people who can multiply two numbers of around 50 digits each in less time than the computers we use can.
The difference is that one is limited whereas the other is unlimited (compare with a Turing machine).
We are able to partially mimic some aspects of the functioning of the human brain for solving problems. Computers are still far from mimicking a human brain. Artificial neural networks are more like toys! We are also not able to understand how a brain works. So there is a huge research challenge in bridging the gap to develop powerful problem solvers or to help humans.
Yes Ajith, that is one of the reasons why computers are not equivalent to the brain.
Harshadkumar, you seem to be confused as to which is most like a Turing machine, a von Neumann processor or the brain. Von Neumann processors are designed to operate like UTMs (Universal Turing Machines), while the brain is not and operates under a completely different set of rules. As a result, the 50-digit feat you mention, while impressive, is not the same as the nearly unlimited number of digits that a computer can use.
It is comparing apples and oranges, which is why your statement seems confused.
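For perspective, multiplying two 50-digit numbers is trivial for arbitrary-precision integer arithmetic; a quick sketch in Python (exact timing will vary by machine):

```python
# Two 50-digit numbers multiply essentially instantly with
# arbitrary-precision integers; the digit count is limited only by
# memory, not by the architecture.
import random, time

a = random.randrange(10**49, 10**50)   # a random 50-digit number
b = random.randrange(10**49, 10**50)

start = time.perf_counter()
product = a * b
elapsed = time.perf_counter() - start

print(len(str(product)), "digit product in", elapsed, "seconds")
```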
The question becomes: given a difference in capabilities and a difference in function, can we expect some future computer to be able to simulate the brain, and if not, why not?
Harshadkumar's unlimited power is one reason often given for not being able to simulate the brain, but that begs the question of how that unlimited power exists if computers can outpace the brain doing comparable things. As Ajith says, computers can't do everything the brain does, but should we try to make them do so? Or is there some intermediate position where the simulation becomes good enough that it won't matter that we are not doing exactly the same thing?
I see the human brain vs. NN as follows.
The neural networks that we learn about in computer science have neurons that are programmed (assigned weights externally) by some algorithm. It is that algorithm that programs the neurons; the neurons themselves do not know what solution they are being programmed for. Moreover, we have to tell the NN which neurons it should learn with (establish mappings between), and we also tell it which particular algorithm to use for the programming (assigning the weights).
In the human brain, by contrast, neurons can program themselves. The neurons may not need any external algorithm. The brain can choose which algorithm is best for a particular problem and can even generate a new algorithm when there is a need (which I do not think any computer can do). Moreover, whenever more than one task is assigned to the brain, the neurons themselves adjust how to solve them, and how to keep more than one mapping in the brain's so-called infinite neurons.
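For what it's worth, here is a minimal sketch of the "externally assigned weights" point: a single perceptron trained by an outside rule to compute logical AND. The data, learning rate, and training loop are all imposed by the programmer; the unit itself has no notion of the task.

```python
# Minimal perceptron trained by an external rule to compute logical AND.
# The weights are assigned from outside; the unit has no idea what it
# is being trained for.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate, chosen by the programmer, not by the "neuron"

def output(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for epoch in range(20):                      # external training loop
    for x, target in data:
        error = target - output(x)           # external error signal
        w[0] += lr * error * x[0]            # weight updates imposed on
        w[1] += lr * error * x[1]            # the unit from outside
        bias += lr * error

print([output(x) for x, _ in data])          # -> [0, 0, 0, 1]
```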
I think scientists won't be able to make any computer that can mimic all aspects of the human brain simultaneously, though a few aspects can be mimicked. Ultimately it is the creator (the human being) who can make such a computer, and the creator always knows the loopholes of the powerful computer he has created.
The power of the human brain is unlimited; unlimited means unlimited, so comparison is not possible. One has infinite power and capacity and the other has finite and limited power and capacity. It would be like comparing oranges or apples with plastic: plant the seeds of oranges or apples versus plastic and see what happens. That is the difference.
Harshadkumar:"Whereas in human brain, neurons can program themselves. The neurons might not be needing any external algorithm. The brain can chose which algorithm is best for particular problem and even can generate new algorithm when there is a need. (which I do not think any computer can do). Moreover, whenever more than one tasks are assigned to brain, the neurons themselves adjust for how to solve or how to keep more than one mapping in so called infinite neurons in brain."
Although some ANNs require external functions like back-propagation to operate efficiently, many ANN simulations today are based on internal functions of the neuron. The fact that the neural structure needs a program to run is incidental to the model. There is still research being done on how the brain chooses which algorithm is best for a particular problem, and on how often it makes mistakes doing so.
Computers can generate new algorithms when needed, if they are self-programming, which many computers are not. Genetic programming is one way of developing new algorithms at need.
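As a toy illustration of that idea (only a sketch; real genetic programming systems are far richer), here is a tiny evolutionary loop that breeds arithmetic expressions until one matches a target behaviour the programmer never wrote down explicitly:

```python
# Toy genetic programming: evolve expression trees over {x, 1, 2, +, -, *}
# to reproduce the behaviour of x*x + x on sample points.
import random

OPS = ["+", "-", "*"]

def random_expr(depth=0):
    if depth > 2 or random.random() < 0.3:
        return random.choice(["x", "1", "2"])
    return (random.choice(OPS), random_expr(depth + 1), random_expr(depth + 1))

def evaluate(e, x):
    if e == "x":
        return x
    if isinstance(e, str):
        return int(e)
    op, a, b = e
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == "+" else a - b if op == "-" else a * b

def fitness(e):  # total error against the target behaviour x*x + x
    return sum(abs(evaluate(e, x) - (x * x + x)) for x in range(-5, 6))

def mutate(e):
    if random.random() < 0.3:
        return random_expr()
    if isinstance(e, str):
        return e
    return (e[0], mutate(e[1]), mutate(e[2]))

population = [random_expr() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)
    if fitness(population[0]) == 0:
        break
    survivors = population[:50]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]   # variation

best = min(population, key=fitness)
print("best:", best, "error:", fitness(best))
```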
I am not sure what "infinite neurons" are, but they sound powerful. Most neurons I know about in the brain have a large but finite number of connections, measured in the tens of thousands.
But in principle, a computer can simulate how a brain learns and functions, if the neural network is a good approximation of the brain-like network.
A major difference between the two is that a computer is programmed by a human for a specific given task but a brain programs itself by learning autonomously for an open array of many tasks.
Yes Juyang, that is true.
Part of the reason why the brain is so much more flexible than the computer is that it can learn multiple things at one time; the computer must be programmed for a single task at any one time. However, this is because of the limitations of the technology involved, not because of a theoretical limit on computing devices in general. We must be careful not to paint future limits into being with our theories.
Graeme,
While the question posed brings about some debate, it is curious that if we flip the question around ("Is the computer like a human brain?") I usually get totally different answers. I would like to elaborate more on this flip side.
The huge gap is due to a few reasons:
1) The markets chose CPU speed to accelerate single instructions, as opposed to parallel computation (which might give way to more brain-like processing). Though there were some early attempts at parallel computation, such as the transputer, the markets chose single-instruction systems.
2) The algorithms for computers have, since the beginning, been mostly serial in nature (just take a look at a basic algorithms class).
3) People wanted an extension of themselves, and for the most part hate a substitute.
Today, because of energy concerns, we are turning to GPUs in supercomputers, which are massively parallel processors that I think are inspiring new algorithm design to solve problems in a parallel fashion. This is a first move in the right direction (though the main reason is energy efficiency) towards building the scaffolding on which to move computing towards parallel computation, and to build from this a subset of more brain-like systems.
On the scientific and technical side:
1) We still don't know enough of the workings of the brain to build a system which works like a brain
2) At what level do we choose to model the system?
3) What computational platform do we want to use: GPUs, FPGAs, etc.?
4) Do we still want to use silicon-based systems, or do we move to more biologically oriented systems (for example, using DNA to assist in building systems: http://engineering.stanford.edu/news/stanford-scientists-use-dna-assemble-transistor-graphene ; another example is biological computers)?
I think we still have a long way to go, but because of our current paradigm shift in supercomputing I think we are accidentally moving in the right direction, towards parallelism (a minimal illustration of the serial/parallel contrast follows below). This will in turn provide some of the tools for closing the gaps that remain between computer systems and the brain.
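To illustrate the serial/parallel contrast mentioned above, here is the same computation in both styles (NumPy stands in for a GPU here; the point is one instruction applied to many data elements at once):

```python
# The same computation in the two styles: an explicitly serial loop
# versus a single data-parallel operation over the whole array.
import numpy as np

signal = np.random.rand(100_000)

# Serial style: one element at a time, as in a basic algorithms class.
scaled_serial = np.empty_like(signal)
for i in range(signal.size):
    scaled_serial[i] = 2.0 * signal[i] + 1.0

# Parallel style: the whole array in one vectorized step.
scaled_parallel = 2.0 * signal + 1.0

assert np.allclose(scaled_serial, scaled_parallel)
```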
Arturo, I am well aware that computers are not like the brain. Parallelism is one way of approaching a closer match, but the architecture of the brain is still quite distant from even the architecture of a massively parallel computer. Part of the reason that we don't do more parallelism is that compiler design is still oriented toward single-path serialism, so massive parallelism works best when it is linked to virtual machines, each operating as a serial processor running a single-path program.
This loses about 20% efficiency to overhead but makes better use of the processors than any attempt at parallelization after the fact.
Graeme,
I think that if we do not move in the direction of inherent parallelism, we won't have the foundation to make computer simulations move closer to making machines more like brains. As long as we have these limitations, the gap between brains and computers will remain wide. Take, for example, the simulation cost of N Purkinje cells: even at the network level of simulation it takes a tremendous amount of computing power (I wouldn't even imagine it at the chemical-process level).
The chemical-process level adds significant processing load, but allows the simulation to reflect more of the learning mechanisms found in neurons. One of the reasons that speed of processing is not wasted is that we can simulate neurons at the chemical-process level and get them to operate at nearly the speed of actual neurons, in arrays large enough for at least toy applications.
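As an indication of what such a simulation steps through, here is a minimal leaky integrate-and-fire neuron. This is far simpler than a true chemical-level (e.g. Hodgkin-Huxley) model, and the constants are just typical textbook values:

```python
# A leaky integrate-and-fire neuron: the simplest membrane dynamics a
# neuron simulation steps through, time slice by time slice.
dt = 0.1          # ms per step
tau = 10.0        # membrane time constant, ms
v_rest = -65.0    # resting potential, mV
v_thresh = -50.0  # spike threshold, mV
v_reset = -70.0   # post-spike reset, mV

v = v_rest
spikes = []
for step in range(1000):                # 100 ms of simulated time
    current = 20.0 if 200 <= step < 800 else 0.0   # injected drive, mV
    dv = (-(v - v_rest) + current) * dt / tau      # leaky integration
    v += dv
    if v >= v_thresh:                   # threshold crossing -> spike
        spikes.append(step * dt)
        v = v_reset
print(len(spikes), "spikes at (ms):", spikes)
```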
I have yet to see evidence that we have broken the design limitation that every neuron works just like the next one, so designing a Purkinje neuron is as yet beyond our art. But I am not current on the state of the art.
I would be interested in looking at a simulation of a Purkinje cell network, as I have a theory about how they interact with the PFC to serve up action sequences in the brain.
A computer is much like a brain, with the exception that it does not have the social and intangible correspondence that the brain has: "its" mind. The computer model fails to portray this connection, however loose it may be; "electrical mind" would be a dubious term. It does not sound right.
Today we wouldn't talk about electronic minds; we are, after all, beyond the day of electronic marvels. Instead we would talk about e-minds or i-minds, depending on whether or not the internet was involved.
Coco, thank you for your contribution, but I am not looking for the ultimate difference, just the individual ones that make the model tenuous.
For instance, no one yet has mentioned that at the thought level, the architecture of a computer and the architecture of a brain have much in common: thoughts seem to be built on an automaton basis, similar to a self-programming robot.
If we understand the difference we might be able to program a robot to think.
Min, I think it is a mistake to look for only one difference; that is why I asked how it is different, not what is different. When you talk about computers not asking questions, I think you are taking standard PC-type computers for granted. Query agents ask questions, quiz bots ask questions, decision-support software asks questions, but most computers are programmed without curiosity.
It is exactly as hard for the computer to ask the second question as the first, unlike humans, who tend to find it easier to ask questions once the subject is broached.
Aslan, although there is some confusion regarding quantum physics, there is reason to believe that Penrose was wrong. Most of the confusion comes from mathematicians rather than physicists themselves; mathematicians who have a Newtonian view of mathematics and try to impose that on quantum physics.
One of the reasons that Penrose was wrong is simply that he did not take into account the uncertainty in the systems, and treated them as hard systems instead of soft. To iron out the uncertainties in a system takes quite a bit more mathematics than to soften the system and allow it to live with uncertainty. He posited that the systems would have to have more mathematics than they actually do, and so retreated to the quantum level.
The quantum level is a mystery to him, because it too is rife with uncertainties, and so he was willing to accept the suggestion of microtubules as the mechanism. This has been more or less proven inaccurate, in that the types of inclusions needed for quantum computing on microtubules have not proven to be practical in biological systems.
Although it is too early to eliminate quantum-level interactions as a factor in intelligence and consciousness, recent evidence linking consciousness to perturbations in the brain has suggested that, instead of a cloud outside the brain, consciousness can be found in functions inside the brain that become more complex as the level of consciousness increases. This flies in the face of Penrose's assertions.
A classical neural network by itself cannot be inquisitive; that requires the ability to express emotion, which lies outside the classical neural-network paradigm.
Quite an interesting discussion. The whole problem, I think, lies in our ignorance of the way our brain works; every day we are discovering new things about it. We can, however, explain a computer, and any other structure, physical or not, that we put in it. That's one difference. If I had to pick one single difference, though, I would stick to the amazing net of connections neurons make, and add to it its resilience to damage. The brain adapts very quickly and efficiently, and we don't know why.
I am surprised that you ask the question above.
The human brain is conscious of its existence and its thoughts. There are also qualia. These two phenomena are not measurable, and we have no idea how they work...
Just a little quibble: there is no such thing as "phenomenons". "Phenomenon" is a Greek word; the plural is "phenomena", no s's involved. Wilfried, I assume that since the original question was mine you are talking to me.
Actually, I have a reason to ask it: there is new evidence for a pseudo-serial architecture built on the parallel one, and the serial architecture has a distinct resemblance to an automaton based on a computer. The differences might answer some of the questions of how we think and are conscious.
Perhaps I should be a bit more circumspect in the way I term things. Evidence was too strong a word. New is perhaps inadvisable because some of the basis for this has been around for a while. I work at the theoretical level so that is the level at which the "Evidence" has been noted.
It all starts with the discovery that there are two centers of control for Working Memory, a frontal control area and a hippocampal control area.
If either one is silenced, there is a drop in the effectiveness of Working Memory for a while, and then a recovery to near top efficiency as the other center takes over control. It never completely recovers, suggesting that part of working memory is the ability to use both centers at once.
This discovery eventually led to my labelling the combined control "complicit attention". Theoretically it is a mixture of both implicit and explicit addressing, in which the implicit addressing recruits an area for processing and the explicit addressing supplies data to that area. This is not new; the original work was done back in 2009 and mentioned in a thread I started on "weak attention" on Nature Network.
What is new is the connection of the work to "thought" which has only recently been done.
Graeme,
thank you for your little lesson in correct writing. I wasn't sure whether English is like German in this respect. Thanks for that.
Maybe the serial architecture is related to the experience of time as a sequence of sensory inputs. It reminds me of a film at 24 frames per second, which produces a seemingly fluid motion from a rapid sequence of images. This is due to the processing speed. Unfortunately, I do not know if there is such a sequencing of the sensory data in humans, where there ought to be a dead time in which you would have no perception from a sense organ. This might be measurable.
Indeed, there are mechanisms in the brain to bridge interruptions in event sequences. How event sequences are processed:
http://www.eurekalert.org/pub_releases/2011-08/cp-cb081811.php
http://phys.org/news/2010-11-mind-syntax-actions.html
http://www.eurekalert.org/pub_releases/2010-07/igdc-eah071910.php
http://www.eurekalert.org/pub_releases/2007-05/afps-bsh050107.php
https://www.researchgate.net/publication/10590479_Representation_of_action_sequence_boundaries_by_macaque_prefrontal_cortical_neurons
Regards,
Joachim
There is a key aspect to consider: the time that passed between the creation of the universe and our appearance. For me it makes sense that complex systems are created implicitly by a much simpler dynamical system that works through time. The point is that maybe there is no way to deterministically model a brain, since the variable time is also required. In that time the system goes out of control, but the result is achieved; a nice trade-off, and one that makes sense given the laws of the universe and of life. If we could perform a simulation with something like a cellular automaton using a golden rule, and achieve very complex intelligence of the kind asked about in this question, then we could think that chances are this universe works in a similar way. For the moment I do not know of any such achievement, but it is a very interesting topic.
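A concrete instance of Cristobal's point: Rule 110, an elementary cellular automaton whose one-line update rule has been proven Turing-complete (Cook), generates complex structure only by being run through time:

```python
# Rule 110: a provably Turing-complete one-dimensional cellular
# automaton. Each cell's next state is read off from the rule number,
# indexed by its three-cell neighborhood.
RULE = 110
width, steps = 64, 32
cells = [0] * width
cells[width // 2] = 1                    # a single seeded cell

for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = [(RULE >> (4 * cells[(i - 1) % width]
                       + 2 * cells[i]
                       + cells[(i + 1) % width])) & 1
             for i in range(width)]
```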
Cristobal, while it is possible that you won't be able to model the human brain, the evidence from other complex systems is that they have superposition; in other words, there are distinct sub-assemblies within the main assembly that can be analysed to get an idea of what is happening within the larger system. The brain has these sub-assemblies too; it is just that they do not correspond to the theoretical bits and pieces that we usually think of when we talk of mind. It is not that we can't analyse them; it is that our theory of the mind is based on an illusion of will, and the brain doesn't work that way.
@Graeme. The assembly approach indeed is useful in the way you put it. About the brain and mind, that seems a fundamental starting point. I imagine there must be an advanced state of the art related to these topics.
I want to share this news article about a really big project for simulating and understanding the brain: http://www.theguardian.com/science/2013/oct/15/human-brain-project-henry-markram
Yes, there are a number of super projects out there, it suggests that scientists are willing to bet that we can understand the brain within a very short timeframe.
I wouldn't go so far as to state that will is an illusion... it might well be, of course. It might be that there is will, in the sense that we are presented with a set of alternatives and there is no way one can tell with 100% certainty what our decision will be. This is an old discussion, as far as I remember, in physics: does real randomness exist?
But I agree, we base our understanding of the universe on two so called facts: (1) time flows, and (2) there is true randomness. Both may be a result of our brain structures: an illusion created by our own limitations. But they might as well exist.
The point is that we don't know. And both models compete. I just wouldn't rule any of them out for now, for it might be that they do not really compete, but complement each other.
Actually, Roman, the problem is that the "will" model hasn't held up well to scrutiny. The point at which people self-report making the decision comes too late in the process for the decision to do more than confirm a pre-determined choice. Some experiments claim that they can statistically determine the choice up to 7 minutes before the decision point, and there is no conscious process associated with the early stages. By the time it becomes conscious the choice has long since been made, and all that is needed is the go/no-go signal. Further, the whole house of cards around the attribution of agency has come tumbling down with the realization that attribution comes after the fact. The brain actually makes errors of commission, claiming to have caused events "will" couldn't have caused, because they have been mis-attributed to the self. I am afraid that will is very much an illusory effect, not real conscious choice. But it feels so real to us that we can't see past the illusion to the way the brain actually works.
All right. Now I think I understand what you mean by "will". And please correct me if I'm wrong, but you mean conscious will, that is, our self-reporting of "finally I've made up my mind". Is that so?
In that case, I fully agree... our self-awareness of our decision may take place well after the actual making of that decision. But that does not imply there is no will involved, in the sense that we actually make a non-deterministic decision, based on who-knows-what facts, balanced with our emotions (as shown in Damasio's work).
By the way, do you have any pointers to the experiments you just mentioned?
cheers
N
Will, as I define it, is any conscious process that presages a decision. What I am saying is that consciousness comes late to the processing stream, much too late to presage a decision or be the cause of it. Thus the "will" model of a conscious causal link is broken.
The nondeterministic decision making is not conscious, which is what I have been trying to say.
Sorry to say, my mind doesn't lend itself to documenting the pointers to the grist that goes into its mill. I will do a search and see if I can rediscover the article involved.
So, at the end of the day, we were more aligned than we thought :-)
I'd be very interested in those pointers indeed. Thank you for trying to get them back.
N
Sorry, I can't find it. Either my memory is playing tricks on me, or the article has been withdrawn. I did find a similar statement I had made previously about 7 minutes here on ResearchGate, but that is all.
All right... anyway, if you ever come across them, you know where to post the link.
But now we have a problem... since memory does play tricks, we may end up basing our conclusions on false premises... which is bad...
Can't help that, Roman, which is why I remain humble. My memory is often very good, just not detailed: if I remember it, it probably happened, but the details slip past me.
In any case, the point was only in support of a more general statement: that will does not stand up to scrutiny if we assume that will means conscious choice. Notice I am not talking about "free will" but will itself.
Agreed.
Just a small parenthesis... this seems to be a well-known and established fact in the realm of martial arts, where the main rule is "never think". Obviously, that's not meant to be something like "you should be stupid", but rather that conscious thinking takes invaluable time, especially when the price is death. So one should never try to make a decision, for it's already been made...
N
Graeme,
> Some experiments claim that they can
> statistically determine the choice up to 7
> minutes before the decision point, and there
> is no conscious process associated with the
> early stages.
are you talking about the Libet experiments?
http://www.newscientist.com/article/dn17835-free-will-is-not-an-illusion-after-all.html
http://www.eurekalert.org/pub_releases/2008-04/m-udi041408.php
Related:
http://arxiv.org/abs/1310.3225
http://arxiv.org/abs/1202.0720
Regards,
Joachim
It's the second report all right, and I still have egg on my face because I got seconds and minutes mixed up.
Here's an interesting article on internal train of thought.
doi:10.1016/j.brainres.2011.03.072
Sorry, it's behind a paywall (it cost me over $35.00).
Here are some similar articles, which cited the previous article or are on similar subjects, recommended by ScienceDirect:
Brain Research volume 1428 [2012] pg 1-2
NeuroImage volume 69 [2013] pg 120-125
Trends in Cognitive Sciences volume 16, issue 12 [2012] pg 584-592
Neuron volume 76, Issue 4, [2012] pg 677-694
NeuroImage volume 61, issue 2, [2012] pg 437-449
Trends in Cognitive Sciences volume 15, issue 7, [2011] pg 319-326
Actually, if you look at the frontoparietal network plus the cerebellum as the implicit attention network, and the default network as the explicit attention network, the juxtaposition of the two defines complicit attention, so the article actually supports my contention that internal trains of thought (i.e. thinking) are based on complicit attention.
The brain is very different from the digital computer. Just compare the parts and their connections: no similarity at all.
Computational functionalists maintain that, despite these obvious differences, there is a parallel at an abstract level. They claim that the brain is an information processor which can be simulated on a computer. Some go even further and believe that a computer with the right program would be conscious. I think John Searle's Chinese Room argument puts paid to these claims but, of course, computational functionalism could be vindicated by evidence.
Richard, Searle's Chinese Room argument has the most to say against rule-based systems "understanding" anything; he has other arguments against artificial consciousness.
His most telling argument is that any attempt to implement consciousness on a computer will be a simulation, and a simulation is not the real thing. However, I have noted that consciousness is already a simulation of a pseudo-serial process on a massively parallel processor, and a simulation on one machine is equivalent to the same simulation on another machine, no matter how differently they have to be implemented or what technology they are implemented under.
See "Virtual Machines" as an example of this.
The main problem in understanding the "functions" of the mind is in understanding what the parts do and how they are connected. This is much more complicated than it seems, because the parts have evolved to work together in an almost seamless manner, so it is hard to see where one part leaves off and another begins. If we can ever get to the point where we understand the pieces, we will be able (it might not be practical, but we will be able) to implement a computer system that looks a lot more like the brain than any currently do.
As the architectures merge, the differences between a digital computer and the brain will become one of technologies rather than one of "No similarity at all".
That is not the problem this thread was created to solve, however. The question this thread was created to address is: is there a computer-like automaton implemented by the trains of thought that have recently been characterized, those trains of thought being pseudo-serial in nature?
That is a much more interesting argument than the one over whether the brain is similar at all to a computer piecewise. I think we can both agree that currently it isn't.
Hi Graeme, I reject the computational functionalist theory: consciousness is not a computational phenomenon. Nor is it a simulation of anything; it is the reality of certain brain activity which we do not yet understand. Yes, the brain is massively parallel, but that doesn't mean it's a processor: IMO, a massively parallel computer could not simulate brain activity and so produce consciousness.
Hi Richard, there are many people who share your opinion. I guess the deciding factor will be whether someone designs a computer that can simulate brain activity or not. Unfortunately we are far from a solution to the problem, so your scepticism has some validity. Obviously we are at two poles of the discussion and have little common ground, so I am not sure there is any benefit in arguing the issue. But to define what we can discuss: why do we need to define the brain as a processor? That is purely a theoretical consideration.
Richard,
I read your answer above, but I do not understand why you hold opinions like "consciousness is not a computational phenomenon."
Could you please elaborate your opinion?
Graeme: indeed, I'll eat my words if the evidence is provided. Agreed: no point in arguing for the sake of it :) I should say that I do think that (non-conscious) AI is possible, but even that is proving immensely difficult to program.
Wilfried: my view is that consciousness arises from devices such as the brain, but that the brain works totally differently from a computer. IMO a computational device, i.e. one that flips bits in order to perform computations, has no possibility whatsoever of being conscious. I take the same view as Searle on consciousness.
Richard, flipping bits is a "digital" computation mechanism; there are other mechanisms of computation that do not require bits at all. We may simply have to consider new forms of computation and step away from our addiction to the digital.
Alternately, digital computation is so basic, that we might be able to come up with a solution after we have included enough information about what the brain IS doing.
The real issue seems to be whether what the brain is doing is computational at any level or not. We simply do not know, but indications are strong that if it is not computational, it is at least multi-variate and thus can be simulated digitally.
I agree with you Graeme: we don't know what the brain is doing.
You can widen the term "computation" to include the kitchen sink, and therefore brains, but where does that get you?
You have to be careful about the meaning of simulation. My opinion is that if you can simulate it digitally in any strong sense, then it is not what the brain does. I can simulate all sorts of things on a computer, but that is merely grinding numbers to reproduce what is measured about the system under study; it does not mean that the system has been instantiated. Consider molecular dynamics as an example.
Richard,
you are right. Simulation is not the "real thing", but we do not have alternatives for testing our hypotheses. We hope to find the mechanisms if the behavior of a simulated system is like (or similar to) the behavior we observe. What do physicists do when they calculate with their formulas? This is also a kind of simulation, and not the "real thing".
The problem is even much deeper: we do not know what "the real thing" really (!) is... Our brain also simulates "reality" in our mind, and scientists have their formulas. Nowhere is the "real thing"...
Well, where is your problem now?
Our brain is better than a computer because its neurons are accurate and multifunctional, and also it was the brain that made the computer, not the computer that made the human brain.
Richard, I think you are tending towards a metaphysical "real thing" that is unapproachable by science. Considering that science has been so successful, even though this argument has been correctly applicable for centuries, I have to wonder if it actually makes a difference.
Mukesh, while I agree with everything you say, I am not sure it is significant that the brain made a computer. What I am asking about is a similarity in the function of the brain at a level that acts in some sense like a computer.
Graeme,
your question is quite interesting. I wonder how a brain is able to act algorithmically even though it is a NN, which works totally differently.
Wilfried,
If I am correct, the difference lies in the network dynamics of "trains of thought": these network dynamics create a pseudo-serial effect that acts like a computer command stream, in that there is a stream of functions that deal with data. The train of thought implements an algorithmic interface to the massively parallel neural network.
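One could caricature that description in code as follows; this is only my reading of it, and the operation names and state are entirely hypothetical: a serial stream of "thought" steps, each carried out as one parallel update over the whole network state.

```python
# Pseudo-serial command stream over a parallel substrate (a caricature,
# not a model): serial at the top, parallel underneath.
import numpy as np

state = np.random.rand(10_000)             # massively parallel substrate

def attend(s):  return s * (s > s.mean())  # every unit updates at once
def recall(s):  return np.roll(s, 17) + s  # toy "association" step
def decide(s):  return s / (s.max() + 1e-9)

train_of_thought = [attend, recall, decide]  # the pseudo-serial stream

for step in train_of_thought:              # one step at a time...
    state = step(state)                    # ...but each step is parallel
print("winning unit:", int(state.argmax()))
```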
Like some facilitating environments, the human brain also requires suitably conducive conditions to work trouble-free.
Dear Prof. Graeme Smith,
Thanks for your valuable question, and thanks at the same time to all the respondents for their interesting answers.
I have been engaging with the answers from @Joachim Pimiskern, @Richard J Wilson, @Wilfried Musterle and @A. Mosavi.
Graeme,
It seems that brains and computers do not function the same way. This is a huge understatement. In fact they do not function the same way AT ALL:
1) hardware, software
Brain: the hardware is also the "software" (see point 2 below)
Computer: hardware and software are distinct.
2) learning
Brain: gradual, and results in changes in the "hardware"
Computer: all or none, and results in changes in the "software"
3) performance on a problem the solution of which was not learned
Brain: somewhere between 0 and 100% correct
Computer: 0% by definition (see point 2, learning)
4) post-lesioning performance on an already mastered task/problem
Brain: "graceful degradation" in performance (i. e., not of 100%, but not of 0% either, somewhere in between)
Computer: 0% correct if the data needed to solve the problem were affected, 100% otherwise
Also, the computer metaphor holds that there are two different sorts of things:
- symbols that stand for the things in the real word (those symbols represent those things)
- syntactic rules (the "software") that tell the computer how to operate on (i.e., deal with) the symbols in order to achieve a goal (i.e., carry out a task)
Now, we know how the symbols are grounded for a computer: the human programmer decides what each symbol stands for.
The good old-fashioned artificial intelligence (GOFAI henceforth) take on human cognition is that human cognition works just as a computer does (hence your question, I guess). Numerous problems arise within this view; for instance, the symbol grounding problem (see Harnad, 1990). We've seen how symbols are "grounded" (i.e., from where they take their meaning) for computers: the programmer assigns a meaning to each symbol (s)he uses. Then, on the GOFAI view, how are the symbols that we supposedly have in our heads grounded? There is no satisfactory answer to this question (unless you go with "God put them there").
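The grounding point can be put in a few lines of code (the symbols and glosses here are invented for illustration): for a computer, the link from symbol to world is a table supplied from outside by the programmer; nothing in the program itself connects "CAT" to cats.

```python
# How symbols are grounded for a computer: an external table decided
# by the programmer. The syntactic machinery shuffles the symbols
# perfectly well, but the meaning lives only in the table.
grounding = {                     # supplied by the programmer
    "CAT": "a small domesticated felid",
    "MAT": "a flat piece of floor covering",
    "ON":  lambda x, y: f"{x} is supported by {y}",
}

sentence = ("ON", "CAT", "MAT")   # pure syntax: a tuple of tokens
op, a, b = sentence
print(grounding[op](a, b))        # meaning comes from the table,
                                  # not from the symbols or the rules
```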
Check out also JR Searle's Chinese room argument:
John R. Searle (1980) Minds, brains, and programs. BBS 3:417–457.
Cheers,
Serban
Dear Serban, I think you have mistaken me for an advocate of GOFAI. In fact I fully agree with most of your comments about the brain being different from a computer, but it is the neural network level that is so different. What I am talking about is the trains-of-thought level, where the differences are not quite so striking. That this level is based on the neural network level makes for an interesting juxtaposition, in which we have to accept completely different limits for the base of the system and for the apparent system that sits above it.
The symbols are grounded in meanings which are in turn the result of processing in the implicit memory areas, not in the symbolic areas.
We should not discuss the different constructions of computers and brains. Technical and biological solutions to the problem of 'how to understand the world' produce, by necessity, different hardware. That is not the problem we want to discuss.
What kind of information processing structure do we need to make the world significant and meaningful?
We need not only a brain or a computer, we need also a body and its needs.
The question of what consciousness is or how it works is extremely difficult. Maybe it would be easier to study the genesis of consciousness.
Young children develop an individual consciousness, a personal knowledge of how the world works, what meaning things have and what actions are possible and necessary to achieve goals. This is an attempt to fathom consciousness bottom-up.
The way the meaning of the world (objects or events or situations) is constituted for a human brain/mind is completely different from the way it is done by a computer. The former involves the activation of experience in the sensory-motor cortices and affective centres of the brain. The meaning of, say, anger is an 'experience' for a human cognitive system, whereas for a computer it is constituted by specific 'information' which produces some specific behaviour(s) (and is the result of some preceding behaviours or other mental states) without any role for experience. It is merely guided by an abstract algorithm in a software program (some complex rules), with no role for body or environment, and no role for purpose, valuation or context, which are features or attributes that presuppose consciousness. Software is just a blind principle and doesn't need any consciousness to have the above-mentioned features (value, purpose, experience, etc.).
In my view, humanity has steered in a wrong direction by accepting the computer metaphor as the dominant conceptual system for thinking about the human mind. It's simply a wrong model... and even some psychologists and cognitive neuroscientists are infected by it, as reflected in their talk of 'information' in brain centres and 'computations' over abstract information... The brain doesn't need to work in these terms. Neurons are not 'copper wires' that merely enable the flow of electricity without having their own chemistry; they have a deep chemistry down to the molecular level, and those processes have a say in cognitive functioning and in the generation or emergence of experience and consciousness itself.
We need to study the evolutionary origins of life in the simplest microorganisms, and how it evolved to the level of human consciousness, by studying variations in genes alongside changes in environment. We also need to study developmental changes in the infantile mind as it interacts with its environment (both physical and social) and gradually acquires cognitive abilities through such interactions; these abilities are not given to it full-fledged as an innate gift.
In other words, I am saying the computational, representational model of mind (CRMM) is simply misleading if one wants to understand the nature and working of the genuine HUMAN mind.
CRMM, or classical computationalism (CC), is a description of how a robot can function 'intelligently', so it might be an alternate kind of intelligence, but it is not HUMAN intelligence.
Further, it is based on the assumed definition of intelligence as "problem solving". In the human context, that definition is limited, if not wrong.
For someone interested in HUMAN cognitive science, CRMM, CC, or functionalism (disregarding the subtle differences they might have) are simply wrong-headed, and a waste of time. The human mind doesn't work that way!
The short answer to the question "is the brain like a computer?" is no. Just look at it, then compare it with a computer: are they not very different? A computer is a switching device: it flips bits; neurons do no such thing. There is, of course, the rather tired and weak analogy which computational functionalists mistakenly take as gospel, but the brain is a totally different system from any digital computer.