Some examples of possible intersections I'm interested in: the tractability of automating different tasks, and the implications of the brain's complexity for comparing the computational complexity of different domains of human intelligence (e.g., vision, action, abstract reasoning).
Ciao, I don't have anything specific myself, but this is very, very interesting! Please let me know if you find good material!
I suggest looking for papers by F. Mussa-Ivaldi, Bizzi, Karniel, and Sanguineti/Morasso; Daniel Wolpert (Imperial College) is another good one!
There are certainly people working on this because of bio-mimetic robots (see Sandini at www.iit.it and the Babybot project).
Hope this helps, ciao, emanuele
The classic textbook by Russell and Norvig does. It even has chapters on the philosophy of AI.
I suggest, if you haven't already, reading about a CS methodology called artificial neural networks (ANNs).
But what do you mean when you say, "comparing the computational complexity of different domains of human intelligence"?
Try CPPNs (Compositional Pattern Producing Networks), relatively recent research by Kenneth O. Stanley (well, not so recent anymore). They differ from standard neural networks by finding solutions in a higher-dimensional space.
Thanks, everyone! This is all very helpful feedback and gives me a lot to look into. Just to clarify the original question: I'm curious what we know and don't know about the amount and type of computation the brain performs in order to carry out different tasks we would consider intelligent. For example, based on (admittedly very preliminary!) research, it seems to me that a major trend of the last few decades has been realizing that things we think of as "easy," such as vision, are actually much more complex than we originally thought, simply because we're naturally good at them, whereas things like playing chess turn out to be tractable problems given enough processing power. I'm curious to what extent we have a clear picture of how different tasks compare to one another in terms of the computation required within the brain (or the amount of computation that has so far been required to recreate those tasks in AI applications). Perhaps this would help in thinking through which tasks may or may not be easily automated anytime soon, and in thinking more clearly about what is easy, hard, etc. Hope that clarifies what I'm getting at.
*And since I don't have a good understanding of computational complexity theory at this stage, my answer to Kees's question is: I'm curious about all of those! I have a lot to learn and am interested in whatever analytical approaches will help me answer the above questions about what is "easy" or "hard" to automate, if it even makes sense to pose the question in such simple terms. Ultimately I'm interested in getting involved to some extent in actual AI research, but also in researching the social implications of AI and robotics, which would seem to benefit from a clearer view of what is possible in the foreseeable future.
One way of determining the computational complexity of a particular type of processing within the brain is simply to count the number of neurons it takes to do the processing. The more neurons there are, the more processing needs to go on.
This simplistic mapping of complexity is confounded by the fact that processing complexity does not respond directly to the complexity of the task: it must also accommodate redundancy, degenerate coding, and uncertainty. The expansion in required processing capacity could therefore be due to any of several factors, not just the combinatorial complexity of the task. Roughly two thirds of the brain is devoted to processing perception, and much of that processing power exists simply because the processes involve high uncertainty, high redundancy, and high degeneracy, each of which expands the size of the processor needed. However, the relative sizes of the modal areas do seem to suggest that processing vision is both more important and more sophisticated than processing sound, and that vision and sound are each significantly more important than processing taste and smell. It is less obvious that processing internal signals is almost as important as vision, but that is because the parietal lobe, while large, is hidden at the top of the skull and is not easily appreciated from the usual side view.
Graeme - thanks for the helpful comments! I have some follow-up questions. Insofar as there is *some* correlation between the complexity of a task and the number of neurons involved (though the relationship is confounded by the factors you mentioned), is there a good rule of thumb for estimating the processing power required on modern computers to simulate a given brain activity? I ask because I have heard estimates of the computational power of the human brain, but I'm not clear to what extent those estimates take sub-neural processing into account. From what I've heard, neurons and synapses cannot be modeled as something simple like being on or off a certain number of times per second; they have much more complex relationships, and there are many different types. Is there a credible, authoritative estimate somewhere of the order of magnitude of computation involved in the brain at all levels? That would be helpful for my purposes, I think.
It would be interesting if there were data comparing the computation involved in different domains of what we consider intelligence, and how those estimates map onto the processing power required today to do those tasks. I heard somewhere that at the energy efficiency of today's computers, it would take something absurdly large, like terawatts of power, to do the same amount of computation once sub-neural computation is taken into account, but I don't know where I heard that or what it was based on.
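To show the kind of estimate I'm after, here is a rough back-of-envelope sketch I put together. Every number in it is an assumption pulled from commonly cited ballpark figures, not an authoritative value, and the answer swings by orders of magnitude as you change them:

# Rough back-of-envelope estimate of brain "operations per second" and the
# power cost of matching it on digital hardware. All numbers below are
# assumptions (commonly cited ballpark figures), not measured values.

NEURONS = 8.6e10             # ~86 billion neurons (assumed)
SYNAPSES_PER_NEURON = 1e4    # ~10,000 synapses per neuron (assumed)
MEAN_FIRING_RATE_HZ = 1.0    # average spike rate, order-of-magnitude guess
OPS_PER_SYNAPTIC_EVENT = 10  # crude allowance for sub-neural processing (assumed)

synaptic_events_per_s = NEURONS * SYNAPSES_PER_NEURON * MEAN_FIRING_RATE_HZ
brain_ops_per_s = synaptic_events_per_s * OPS_PER_SYNAPTIC_EVENT

JOULES_PER_OP_SILICON = 1e-10  # assumed energy per operation for ordinary hardware
BRAIN_POWER_W = 20             # ~20 W metabolic budget of the brain

silicon_power_w = brain_ops_per_s * JOULES_PER_OP_SILICON

print(f"Estimated brain throughput: {brain_ops_per_s:.1e} ops/s")
print(f"Power to match it in silicon (at the assumed efficiency): {silicon_power_w:.1e} W")
print(f"Efficiency gap vs. the brain's ~{BRAIN_POWER_W} W: {silicon_power_w / BRAIN_POWER_W:.1e}x")

Increasing OPS_PER_SYNAPTIC_EVENT to account for detailed sub-neural dynamics is exactly where such estimates blow up by many orders of magnitude, which is the crux of my question.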
Has anyone read/does anyone recommend Biophysics of Computation: Information Processing in Single Neurons by Christof Koch? Seems potentially relevant to answering some of the above questions/getting a better feel for the complexity involved...
Thanks to everyone for your help...
Miles
Experimental data show patterns during action potential generation [2] which point to information processing at a sub-cellular level. The paper [1] relates the general failure to build intelligent thinking machines to reductionist models and suggests a change of paradigm. By focusing mainly on temporal patterns, Koch and many other neuroscientists have been thoroughly misleading regarding the computational power of neurons.
A few points:
• Transient electrical activity and temporal patterns cannot 'hold' fragments of information;
• Our memories need a more stable, non-volatile support at the molecular level (e.g., proteins);
• Simple cells have specialized into neurons, are densely packed, and generate electrical events at different scales to integrate information in the brain.
The digital approximation of action potentials has completely changed the nature of neural computation; this is the wrong direction.
The semantics are hidden in the spatial modulation of action potentials [4], which shows that semantics are built through electrical interaction [3] and integrated at the cognitive level.
[1] From Neuroelectrodynamics to Thinking Machines, Cognitive Computation, 2011. DOI: 10.1007/s12559-011-9106-3. http://www.springerlink.com/content/x1l7388475323758/
[2] Building Spike Representation in Tetrodes, Journal of Neuroscience Methods, Volume 157, Issue 2, 30 October 2006, Pages 364-373. http://dx.doi.org/10.1016/j.jneumeth.2006.05.003
[3] Computing by Physical Interaction in Neurons, Journal of Integrative Neuroscience, Volume 10, Issue 4, 2011, Pages 413-422. http://www.ncbi.nlm.nih.gov/pubmed/22262533
[4] A Comparative Analysis of Integrating Visual Information in Local Neuronal Ensembles, Journal of Neuroscience Methods, Volume 207, Issue 1, 2012, Pages 23-30. http://www.sciencedirect.com/science/article/pii/S0165027012001021
[5] Aur D. and Jog M., Reading the Neural Code: What do Spikes Mean for Behavior? Available from Nature Precedings.
Thanks so much for the input, Dorian! This is extremely helpful and I look forward to reading the papers you mentioned.
Best,
Miles
Part of the problem with translating neural function into processing cycles is that we really don't know what portion of the signal is important. It is easy to assume that spikes are the most important aspect of a neuron's output, because they are what is most obvious after clamping. However, many hints exist that neurons care less about spiking than about the overall gain in charge, and multiple-entry networks suggest that the processing at any one synapse is less important than you might think. What is important is not immediately obvious, and so there is no direct conversion between neural synapses and clock cycles in a computer. Further, the whole structure of memory is different, involving clusters of neurons acting together rather than discrete memory components such as bits and bytes. Because of this, most of the predictive mappings between the brain and the computer are simply misleading. It is not that we cannot predict that at some point there will be congruence, but that all existing predictions are essentially based on poor assumptions about the nature of the processing done in the brain.
Having said that, Professor Ng has designed robots based on sparse vector machines that can functionally exceed the performance of the human brain and body in control applications. It is therefore tempting to assume that what he has done is equivalent to what the brain would be doing in the same job. Unfortunately, the brain can do the same type of activity across a wide range of applications, whereas the robots are specialized in a way that limits them to a single application. In all probability the vectors are too sparse to capture the full breadth of data needed for the wider field of applications.
Thanks again for the input, Graeme! Are there any specific publications you would recommend that speak to those issues (or which would help me develop a good analytical framework for thinking about such issues)?
Best,
Miles
Just to add: automation and AI are not the same from a computer science perspective. For example, I can automate the building of a car at a factory by telling the machinery where each part goes for each type of car, but I cannot tell a machine how to drive, simply because driving, as we see it, is an interaction with other "agents". The location of a part on an individual car model will not change, but introducing reasoning into your application takes a system from an automation system to an AI system. The complexity of an automation system is straightforwardly calculable because it does not change through iteration; the complexity of an AI system is not, because reasoning changes with time.
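As a toy illustration of that distinction (all names and numbers below are invented, not a model of a real factory or driver): the automated step is a fixed table lookup whose cost never changes, while the "reasoning" step depends on the agent's history and so changes over time.

# Automation: part -> position is a fixed table; the work per car is constant.
PART_POSITIONS = {"door": (1.2, 0.0), "wheel": (0.3, -0.5), "seat": (0.9, 0.4)}

def place_parts(parts):
    return [PART_POSITIONS[p] for p in parts]  # same cost on every iteration

# "AI": a trivial agent whose decision rule is re-estimated from experience,
# so its behaviour (and the cost of analysing it) changes with time.
class BrakingAgent:
    def __init__(self):
        self.threshold = 10.0   # initial guess at a safe braking distance
        self.observations = []

    def observe(self, distance, had_to_brake_hard):
        self.observations.append((distance, had_to_brake_hard))
        # Re-fit the rule: raise the threshold if hard braking keeps happening.
        hard = [d for d, h in self.observations if h]
        if hard:
            self.threshold = max(self.threshold, max(hard) * 1.2)

    def act(self, distance):
        return "brake" if distance < self.threshold else "cruise"

agent = BrakingAgent()
agent.observe(12.0, True)
print(place_parts(["door", "wheel"]))  # fixed output, fixed cost
print(agent.act(13.0))                 # depends on the history of interactions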
Graeme,
In response to your previous statement: the question is not how to translate neural function into processing cycles, but how to uniquely map input space to output space in order to simulate brain function. We use the idea of neurons as inspiration in biologically inspired algorithms such as ANNs and boosted learning methods. As computer scientists we are not concerned with the underlying electrical signal that carries out the processing, but with how input events are used to classify outcomes, using dynamic variables and biases to affect each artificial neuron.
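As a quick illustration, here is a minimal sketch of a single artificial neuron mapping an input event to an outcome; the weights and bias are the "dynamic variables" I mean, and all the numbers are arbitrary.

import math

# A single artificial neuron: weighted sum of inputs plus a bias,
# squashed through a nonlinearity. The weights and bias are the quantities
# a learning rule would adjust; the values here are arbitrary.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # logistic activation

# Example: treat a 3-feature input event as a positive outcome if activation > 0.5
activation = neuron([0.2, 0.7, 1.0], weights=[1.5, -0.8, 0.3], bias=-0.1)
print(activation, activation > 0.5)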
It is true that computation of single processes already far surpasses human ability. The reason sparse vector machines are efficient at single tasks is that the algorithm does not allow for multiple outcomes: an SVM merely chooses outcome A or outcome B based on a Bayesian/linear classification scheme, or, in the case of the sparse SVM, linear classification based on minimum error using another probabilistic measure such as expectation maximization. It is possible to adapt the algorithm to the multi-class, multi-outcome case by using a data structure known as a decision tree to combine multiple derivations of the SVM. The problem with adapting AI to a wider breadth of applications is the overlap in computational class structure; this is the so-called wall the industry faces.
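For concreteness, here is a minimal sketch of going from binary to multi-class classification. It uses scikit-learn's one-vs-rest wrapper (one binary SVM per class) rather than the decision-tree combination I describe above, and the toy data is generated on the fly.

# Extending binary SVMs to a multi-class problem via a one-vs-rest combination.
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=5, n_classes=3, random_state=0)

clf = OneVsRestClassifier(LinearSVC(C=1.0, max_iter=5000))
clf.fit(X, y)

print("Training accuracy:", clf.score(X, y))
print("Predicted classes for first 5 samples:", clf.predict(X[:5]))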
Also, Kees, could you please send me the PDF of the article you mentioned?
Best,
Miles
Robert, I agree that is the problem that needs to be dealt with in A.I., but the problem of predicting the number of clock cycles that is equivalent to a number of neurons, so that time frames can be predicted, is another question altogether.
Miles is asking about relative measures of complexity between A.I. and biological intelligences, which falls closer to the latter question than the former.
On the SVM, I have long been concerned that the Bayesian-based theory has fallen into error by being too selective. By doing away with redundancy and uncertainty too early, information is lost that would allow more flexible responses. Instead of pattern matching as an algorithmic end point, I have suggested a satisficing approach that preserves some redundancy and uncertainty, which can then be mined for further information by later processing stages. I was merely pointing out to Miles that there was a reason for doing so.
Kees,
As you can probably tell, I am more a scruffy than a neat.
But that is not the whole reason why I am more interested in uncertainty than in mathematical rigor and focus. It is my belief that the world is a complex and uncertain place, and that far from being reducible to neat formulae, it presents a difficult, slippery interface to the material world that cannot be dealt with wholly by predictable relationships, and thus by neat and elegant solutions. When trying to find a solution to the wider function, I cannot support tools inadequate to the job as being the best solutions.
While soft and heuristic techniques have the reputation of not always finding the optimal solution, they often produce solutions that are "good enough" for the role that needs to be played, and herein lies the difference between a neat solution and a scruffy one: it is not that scruffies are willing to leave the problem unsolved, but that they are willing not to require the optimal solution. The halting problem indicates that even with a neat approach, some portion of problems will fail to complete in a reasonable amount of time. With a scruffy solution you get an intermediate answer anyway, even if the optimal answer was not reached. If the scruffy solution is good enough, it doesn't matter that it was not fully optimal; you can get by with it.
With a good-enough solution arriving earlier than a neat solution could, you can go on to work on more important questions, with some chance of covering a larger set of variables. If you waited for the neat solution, it might never arrive, and you would still have to work on the more important questions after its late arrival.
In other words, in an uncertain world, neatness doesn't always count.
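To make the scruffy idea concrete, here is a minimal satisficing-search sketch; the objective function, the "good enough" threshold, and the budget are all arbitrary stand-ins for a real problem.

import random

# Anytime / satisficing search: keep the best candidate seen so far and stop
# as soon as it is "good enough" or the budget runs out.
def objective(x):
    return -(x - 0.37) ** 2      # optimum (unknown to the searcher) at x = 0.37

def satisficing_search(good_enough=-1e-3, budget=1000, seed=0):
    rng = random.Random(seed)
    best_x, best_score = None, float("-inf")
    for step in range(budget):
        x = rng.random()
        score = objective(x)
        if score > best_score:
            best_x, best_score = x, score
        if best_score >= good_enough:          # "good enough": stop early
            return best_x, best_score, step + 1
    return best_x, best_score, budget          # budget exhausted: return best so far

x, score, steps = satisficing_search()
print(f"x={x:.3f}, score={score:.5f}, after {steps} samples")

Even when the threshold is never reached, the search still hands back its best intermediate answer, which is the point about scruffiness I was making above.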
Graeme,
Just to clarify, my intention is not to start an argument with you (of course I enjoy a good debate, lol), but I feel that your problem statement,
"but the problem of predicting the number of clock cycles that is equivalent to a number of neurons"
is a bit misleading. Equating neurons to clock cycles is like equating lightning to cloud formation. A computer's performance, or an algorithm's performance, is not determined by the clock-cycle count or clock speed of a CPU. So if the goal is to look at the relative complexity of biological intelligence versus AI, then I would look at current implementations of AI algorithms, since AI is not defined by hardware, nor is the CPU in a computer remotely intelligent.
Of course your experience with A.I. might be completely different from mine, and you may be well familiar with non-computerized A.I. while I am still struggling to achieve it. I think we are remarkably close in attitude towards CPUs in general (they are dumber than a sack of hammers), but since the halting problem is a boundary to processing using CPUs, clock cycles are a quick and dirty measure of processing capacity, whether we measure them in teraflops or not.
Many of the predictions about how soon we will get a computer that can compete with the human brain for processing power depend on just such measurements: the brain is estimated to operate at so many teraflops, and if we keep following current integration trends we will be able to achieve that in X number of years with a supercomputer.
By asking the relative complexity of the processing being done by a A.I. versus that being done by a brain, we move one step removed from the actual clock cycle comparison.
If we could, for instance, claim that the brain is working with information that is five times as complex as the computer doing the same job, then we would expect to need a supercomputer five times as complex as the estimated brain size to process the same information.
Alternately, if we could show that the brain uses five times the information to get the exact same effect, we could suggest that the complexity of the computer needed to achieve that effect would be one fifth of the estimated size of the brain, which I think is more to the point. (The brain has to operate within the limitations of biological systems, and thus needs extra processing to achieve the same base function.)
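To make that arithmetic explicit, here is a small sketch; the brain estimate, the current machine throughput, the growth trend, and the factor of five are all placeholder assumptions, not measurements.

import math

# How a relative-complexity factor shifts a naive "when do computers catch up"
# extrapolation. Every number here is a placeholder assumption.
BRAIN_OPS_PER_S = 1e16        # assumed brain-equivalent throughput
CURRENT_OPS_PER_S = 1e15      # assumed current supercomputer throughput
DOUBLING_TIME_YEARS = 2.0     # assumed growth trend

def years_to_reach(target, current=CURRENT_OPS_PER_S, doubling=DOUBLING_TIME_YEARS):
    return max(0.0, math.log2(target / current) * doubling)

for factor, label in [(1.0, "naive parity"),
                      (5.0, "brain info 5x as complex -> need a machine 5x the size"),
                      (0.2, "brain uses 5x the info for the same effect -> need 1/5")]:
    print(f"{label}: {years_to_reach(BRAIN_OPS_PER_S * factor):.1f} years")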
Without a standard against which to measure, such as Tononi's IIT, there is no way to compare biological and silicon intelligences. Tononi himself has suggested that there is no way to evaluate complex systems with his measure, but we can do analysis on simple systems such as neurons and clock cycles.
Of course Tononi based his IIT on bits, and it is somewhat difficult to analyze neurons in terms of bits, but for any simulation we choose there is an equivalent information-theoretic treatment that will translate that implementation into a bit equivalent. The problem is simply that simulations of neural networks are so far below the actual complexity of the mammalian neuron that there is little help to be found in predicting mammalian intelligence using simulated neurons.
Abdessamad Mouzoune
Exactly the point I was trying to make. Just keep in mind that algorithms are measured with metrics such as big-O and space complexity because those are straightforwardly comparable as relative measurements, not because they are the best ways to evaluate algorithms. Big-O is a worst-case upper bound and is usually a good relative metric.
One simple mathematical operation is typically about three instructions, and one instruction does not equal one clock cycle. Different instructions take different lengths of time, and you can process one instruction per clock cycle or 300 threads per clock cycle. Hence, clock cycles are not a measurement of complexity or performance.
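A small illustration, using arbitrary input sizes, of why we compare algorithms by abstract operation counts (big-O) rather than by cycles: the relative story below holds no matter how many cycles each comparison happens to cost on a given CPU.

# Worst-case comparison counts for two search strategies; the asymptotic
# relationship is independent of per-instruction cycle costs.
def linear_search_steps(n):
    return n              # O(n) comparisons in the worst case

def binary_search_steps(n):
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps + 1      # O(log n) comparisons in the worst case

for n in (1_000, 1_000_000, 1_000_000_000):
    print(n, linear_search_steps(n), binary_search_steps(n))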
"The problem is simply that simulations of neural networks are so far below the actual complexity of the mammalian neuron"
- For a single neuron, I disagree. The problem is that there are so many neurons interacting with each other, which drastically escalates the complexity. Each node in a perceptron is not meant to equate to a single neuron.
Please note that I was not talking about any specific neural simulation but about simulations of neurons in general. I am quite interested in how we can get a better model of neural function, but synapses and somas (perceptrons) won't cut it now that we know about the chemical pathways within the cell.
I note that if you take clock cycles too literally, it is a problem. With parallelism and multiple cores, the streaming number of instructions that can be done in a single physical clock cycle may seem quite high, and there can be a significant number of clock cycles per instruction or per operation, but this simply means we are working with a polynomial relationship rather than a non-polynomial one. For any particular technology, the processing speed is directly linked to the clock speed, although translating between them may require a polynomial. It is only the last generation of microcomputers that has broken free of the actual clock rate so far as to seem to run counter to it. Perhaps it should be clock cycles per core, since the number of cycles per operation is usually within an order of magnitude.
A book waiting to be written: symbolic representation of an object of interest to the subject; the stimulus/response model taken to an examination of stages of intensities, or, how the stimulus of the perceived symbol saturates a corresponding, primed or un-primed, neural complex. Through brain imaging, very general areas of the brain can be lit up. With more specificity, what would the brain image look like for a specific target symbol? If these were keyed to a system of multi-dimensional mapping, it would change how we acquire knowledge: zooming in and out of the neural map. My point is that this is all headed towards a unified theory of virtuality. Publications?
Hi Miles,
You say that you are "curious what we know and don't know about the amount/type of computation involved in the brain in order to perform different tasks which we would consider intelligent." So I suggest "Neuroengineering the Future" by Bruce F. Katz.
Jonathon, you make an interesting observation. What would happen if we took the MEG output and translated it using a self-organizing map (SOM)?
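Just to make the idea concrete, here is a minimal self-organizing-map sketch trained on synthetic vectors standing in for MEG sensor readings; the map size, learning schedule, and the fake data are all assumptions, not anything derived from real MEG work.

import numpy as np

# Minimal self-organizing map (SOM) trained on synthetic vectors standing in
# for MEG sensor readings. Map size, learning rates, and data are illustrative.
rng = np.random.default_rng(0)
n_sensors = 32
data = rng.normal(size=(500, n_sensors))          # fake "MEG" samples

grid_w, grid_h = 8, 8
weights = rng.normal(size=(grid_w, grid_h, n_sensors))
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                              indexing="ij"), axis=-1)

def train(data, weights, epochs=20, lr0=0.5, sigma0=3.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for x in data:
            # Best-matching unit: grid node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Pull the BMU and its grid neighbours toward the sample.
            grid_d = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_d ** 2) / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

weights = train(data, weights)
sample_bmu = np.unravel_index(
    np.argmin(np.linalg.norm(weights - data[0], axis=-1)), (grid_w, grid_h))
print("First sample maps to grid node:", sample_bmu)

The interesting question would be whether real MEG epochs cluster onto the map in a way that tracks the stimulus categories Jonathon has in mind.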
Forgive me, Graeme, are you teasing me? I'm a bit of an armchair general here, following the discussion without the benefit of all the literature. What you say sounds as if nobody has thought about MEG/SOM, and thus MEG/SOM sounds plausible. If clinical use of MEG gives only a so-so measure of reliability of the data (the inverse problem), then using the fuzziness of density estimation is the kind of mash-up of elements that yields something outside the box. Sounds like some kind of start. This is like trying to describe the sound of my thinking: does my mind "speak" in a "voice" that, if I try hard enough, I can hear? Are my imaginings "image-able"? For targets, I'm thinking along the lines of how some archetypal symbols are species-specific, like the shape of a honeycomb cell, or the horizontal-vertical axis. These don't seem to be taught or learned so much as generated from within the "nature" of the living tissue. New field? Has this been tried?
How does a honey bee subjectify/objectify the shape of the hexagonal cell, "crafting" it from a source within its living tissue? Likewise, we humans know positional orientation subjectively, and craft representations (maps, etc.) of this from a source within the living tissue (instinctually). It stands to reason that this process should be measurable through brain mapping if a subjective target image is generated by a target-specific objective stimulus. An effective target would be something we label a "symbol", particularly a species-specific archetypal symbol.
Jonathon, as far as I know there has never been a MEG/SOM implementation, but I too am a bit of a theorist rather than down in the trenches digging foxholes.
Jonathon, in the hippocampus-as-action-center model, the ability to map action onto a locational matrix, and thus to orient in a hexagonal manner, might come from the entorhinal cortex links, which in mammals are thought to create a triangular matrix of reference lines used for mapping. Each layer of entorhinal cortex has a slightly different orientation, so simply building each wall of the honeycomb cell against a different entorhinal cortex layer would result in a differently angled wall.
Graeme Smith - can you please elaborate or send some references about your comments on the entorhinal cortex? You would be helping me out a lot if you could do that. Thanks.
A study of how the comb cell is constructed? Initiation, sequence, completion, behavior patterns, biochemical markers... Does the bee conceive of the whole structure, or only of its part? When a human experiences constructing a concept, the language is marked by "con", or "with", implying this is not an activity conducted in isolation but in dynamic relation "with"; "con" representing an abstract artifact that is probably species-specific. My reasoning here, represented by symbolic written discourse, is itself a marker for the process.
Sorry Shady, it is not my own work, just my interpretation of someone else's. I have long since lost the references to that study. I believe the work was done in rodent mazes and was the base work behind the discovery of "place cells", but the extension of that to the system before declarative/episodic memory, which links it to insects, is my own.
Thanks Graeme - I am familiar with grid cells and place cells in the hippocampus, but your eloquent description suggested that you have a more extensive familiarity. Let me know if one of those long-lost references ever shows up, haha. I'll look through my hardcopy papers when I get home and see if I can find a good review.
Any familiarity I have is with the concept of how declarative memory might work. To get there I have to understand what all these things they are finding do, and do a trial assembly to see whether they can work together to do what we think they do.
Just to keep the juices flowing: I'm thinking of the concept of "plastic", as in the "plastic imagery" the film maker Sergei Eisenstein called his use of abstract imagery "given" to an audience (a camera pan to the clouds during a sex scene). He realized the mind would make use of the "plastic" nature of the abstract image, its suggestivity, transferring a complex of symbolic, sensory stimuli to a primed "matrix"; another example of how a species' conceptualization process is partly the result of interaction between stored memory (as a primed place to put new stimuli) and imagination. I have often referred to immature minds as "wet cement": specifically, there must be a reciprocal to suggestion between environment and mind. At the cellular level, is it possible to identify specific patterns in tissue that has been primed by previous stimuli to accept "relevant" external stimuli (the target), and then to observe any changes in response to a specific stimulus of something species-specific and primally "symbolic" (the arrow)? Recent work on Alzheimer's might suggest some of the chemistry of this. I met a man who was stuck in a doorway; he could not remember what a doorway was for.
Graeme - your theories sound interesting. Is there a document anywhere summarizing your views?
Best,
Miles
Jonathon, think of the entorhinal cortex as the mapping-grid generator for a number of different perspectives on the same map. It is likely the parahippocampus that will change with a specific stimulus, and that should operate a bit like the mushroom bodies in fruit flies, in all probability one mushroom body per conceptual division (we do not yet know that this is the case).
Miles, My early work is available at Wikiversity at:
http://en.wikiversity.org/wiki/User:Graeme_E._Smith
My latest work is available here on Researchgate in the topic Artificial Consciousness under the question "Can consciousness be traced back to the declarative memory areas in the hippocampus and parahippocampus?".
Yes! I think this is the definitive guide:
Jerome Feldman, From Molecule to Metaphor.
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=11463
So, possibly, "thought" might be mapped -- catalyst sequence in combination with con-sequence -- in the same manner as chromosome chains
Or try http://www.springerlink.com/content/6204615622770564/ where we present an approach to determining the cognitive complexity of justifications (logical theories in a decidable fragment of first-order logic whose computational complexity is well studied) for entailments of OWL ontologies. We introduce a simple cognitive complexity model and present the results of validating that model via experiments involving OWL users.
Regarding computational complexity and AI (especially learning), you might wanna check out:
1) the Probably Approximately Correct (PAC) learning framework of Leslie Valiant: http://en.wikipedia.org/wiki/Probably_approximately_correct_learning (a small sample-complexity sketch follows at the end of this list)
2) the work of Marcus Hutter on Universal Artificial Intelligence: http://www.springer.com/computer/ai/book/978-3-540-22139-5
3) the PhD thesis of Shane Legg, on machine super intelligence: http://www.vetta.org/documents/Machine_Super_Intelligence.pdf
4) the PhD thesis of Lihong Li, on computational reinforcement learning theory: http://www.cs.rutgers.edu/~mlittman/papers/other/Li09Unifying.pdf
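Following up on item 1): a tiny worked example of the flavour of guarantee the PAC framework gives, for a finite hypothesis class in the realizable case; the parameter values below are arbitrary.

import math

# PAC sample-complexity bound for a finite hypothesis class H (realizable case):
# m >= (1/eps) * (ln|H| + ln(1/delta)) labelled examples suffice for a consistent
# learner to be within eps of the target with probability at least 1 - delta.
def pac_sample_bound(hypothesis_count, eps, delta):
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / eps)

# Example: |H| = 2^20 hypotheses, 5% error tolerance, 99% confidence.
print(pac_sample_bound(hypothesis_count=2 ** 20, eps=0.05, delta=0.01))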
Hi,
I can highly recommend this one:
http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1002236
Where we address the relation between computational capability (fitness) and computational complexity (integrated information).
And this one:
http://arxiv.org/abs/1206.5771
where we show how you can evolve agents to have representations about their world, the very substrate of rationality. It also has a good chapter on various complexity measures, like predictive information and so forth.
Cheers, Arend
Hi, looking at the answers you got: mostly old stuff! If you want to connect to genuinely new research, you will have to leave behind many old assumptions in classical logic and math, computer science, and traditional neuroscience. Restart your knowledge and step forward into the quantum world. Then I recommend "Consciousness and the Universe: Quantum Physics, Evolution, Brain & Mind".
http://www.amazon.com/s?ie=UTF8&keywords=Stuart%20Hameroff&rh=n%3A283155%2Ck%3AStuart%20Hameroff&page=1