My PhD thesis concerning computational brainstem models is almost complete. It seems the role of the brainstem within the human nervous system is still underestimated.
@Andrew Computational models seem to be an important tool for a deeper understanding of the neural correlates of various nervous system functions (at various levels, from the single membrane to behaviour). Most of them concern cortical areas; there is still only a small number of computational models of the brainstem. Are they too difficult, not popular, or is it something else?
Computational neuroscience is extremely limited in its ability to explain much of anything. From the neurodynamics of pain to image classification, computational models are for the most part incapable of answering basic questions. In fact, the nature of the neural code remains a debated issue within computational neuroscience, and as long as we cannot determine definitively what the "code" used by neuronal networks is, how can we possibly hope to model so fundamental a component of human physiology as the brainstem? Also, computational neuroscience focuses more on "learning" (in the mathematical/computational intelligence sense of the word) than on the kind of functions the brainstem is integral to.
Every attempt to understand the basic mechanisms underlying brainstem functioning, e.g. at the system level, seems very important. As you mentioned, its real mechanisms may be very complicated; moreover, direct simulation is beyond our reach, so simplification (of signals, structures, mechanisms) is a necessity. Such results may be a foundation both for more detailed models (with many hypotheses, I am afraid) and for further experimental studies. The "learning" you mention concerns rather the higher levels, cortical and subcortical, while transmission/processing of information within the brainstem may be simpler, at a basic level associated rather with controlled switching (see the role of the ARAS and the roles of various brainstem nuclei within subsystems where the brainstem plays a significant or key role); a toy illustration of such a switching view is sketched at the end of this post. The structure of the brainstem is itself diverse (nuclei, pathways, and circuits instead of the sheet-like neural networks of cortical areas), and this may imply a distinct approach to modelling.
Such knowledge is very difficult to acquire, and we need a lot of time to explain it.
Thus the question is: should we take into consideration that the brainstem may be as important as cortical and subcortical areas and conduct separate research on it, or use only an integrative approach in which the brainstem is one of the CNS elements? And might such an integrative approach omit important issues concerning the brainstem, or not?
Consider, e.g., the following: we know brainstem strokes are very severe, causing e.g. disorders of consciousness (DoCs); their neurorehabilitation is key but difficult, and we do not know whether any neuroplastic changes within the brainstem are possible, given its distinct structure. So where may the potential for neurorehabilitation/recovery from brainstem injuries lie?
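To give a rough, purely illustrative flavour of what the "controlled switching" view mentioned above could look like in a model, here is a deliberately simplified Python sketch: two mutually inhibitory rate units forming a bistable switch, loosely in the spirit of flip-flop descriptions of arousal control. All units, weights, and drives are hypothetical assumptions chosen for demonstration, not a model of any specific brainstem nucleus.

```python
import numpy as np

def simulate_switch(drive_on, drive_off, T=200.0, dt=0.1, tau=5.0, w=2.0):
    """
    Toy bistable 'switch': an ON-promoting and an OFF-promoting rate unit
    with mutual inhibition. All parameters and the rectified-linear
    dynamics are illustrative assumptions only.
    """
    on, off = 0.1, 0.9          # initial activities
    on_trace = []
    for t in np.arange(0.0, T, dt):
        i_on, i_off = drive_on(t), drive_off(t)      # external drives
        # Each unit relaxes toward its rectified net input; mutual
        # inhibition makes the pair settle into one of two states.
        on  += dt * (-on  + max(i_on  - w * off, 0.0)) / tau
        off += dt * (-off + max(i_off - w * on,  0.0)) / tau
        on_trace.append(on)
    return np.array(on_trace)

if __name__ == "__main__":
    # A step increase in 'arousal' drive at t = 100 flips the switch;
    # the ON unit's activity could then gate relay of a signal elsewhere.
    on_activity = simulate_switch(
        drive_on=lambda t: 2.5 if t > 100 else 0.2,
        drive_off=lambda t: 1.0,
    )
    print("ON unit before the step:", round(float(on_activity[900]), 2))
    print("ON unit after the step :", round(float(on_activity[-1]), 2))
```

The point of such a toy is only that a switching/gating description needs far fewer state variables than a cortical-style network model, which is one reason a distinct modelling approach for the brainstem seems plausible.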
In general, models use concepts/elements which are not directly involved with the object of study. Take a look at the Hodgkin-Huxley neuron model. We know that neurons are much more than a simple circuit consisting of conductances and voltage sources in parallel, but this abstraction expanded our knowledge about neural dynamics and the generation of action potentials. To summarize, restating a problem in different terms than those in which it was originally defined is part of an interdisciplinary approach, required to analyze multidimensional phenomena.
Andrew, models do have limits... and that's why we use them. Reality is too complicated to study directly, so we need a first approximation, a window of opportunity to give some explanation (even if hypothetical) of what is happening. There are, of course, different levels of description, but we need to incorporate sufficient detail to account for complex dynamics while reducing this complexity to its essential characteristics to keep the model tractable.
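As a concrete reminder of how far the simple abstraction mentioned above can go, here is a minimal sketch of the classical Hodgkin-Huxley equations integrated with forward Euler, using the standard 1952 squid-axon parameters; the step current and integration settings are arbitrary choices for demonstration.

```python
import numpy as np

# Classical Hodgkin-Huxley (1952) squid-axon parameters
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

# Voltage-dependent rate functions for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(T=50.0, dt=0.01, I_ext=10.0):
    """Integrate the HH equations with forward Euler; I_ext in uA/cm^2."""
    steps = int(T / dt)
    V = -65.0                      # resting potential, mV
    m, h, n = 0.05, 0.6, 0.32      # approximate resting gating values
    trace = np.empty(steps)
    for i in range(steps):
        # Ionic currents: parallel conductances pulling V toward their reversal potentials
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K  * n**4     * (V - E_K)
        I_L  = g_L             * (V - E_L)
        dV = (I_ext - I_Na - I_K - I_L) / C_m
        # First-order gating kinetics
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        V += dt * dV
        trace[i] = V
    return trace

if __name__ == "__main__":
    v = simulate()
    print(f"Peak membrane potential: {v.max():.1f} mV")  # spikes reach roughly +40 mV
```

Even this small toy reproduces repetitive spiking under constant current, which is the sense in which an abstraction that is obviously "wrong" in detail can still expand our understanding.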
"One of the touchstones of the early 1980s was the integration of computational theory with empirical data (e.g., Marr 1982). For the most part, this promise has not come to pass. The wealth of data that has been accrued is almost overwhelming in its complexity, and there are few, if any, overarching models of the complete recognition process. Conversely, of the many computational theories that have been developed over the past two decades, few, if any, are strongly grounded in what we currently know about the functional and neural underpinnings of the primate visual system."
Peissig, J. J., & Tarr, M. J. (2007). Visual object recognition: Do we know more now than we did 20 years ago? Annual Review of Psychology, 58, 75-96.
Evaluations of computational neuroscience in terms of the progress made have become more frequent, and the above is hardly the only retrospective framed in terms of "20 years" to appear this century. Last year saw the publication of Bower, J. M. (ed.) (2013). 20 Years of Computational Neuroscience (Springer Series in Computational Neuroscience Vol. 9). Springer. Looking at computational visual neuroscience again, we find: "Scientists by their nature are eager to test hypotheses or to tell a story about how a given set of facts or findings fit together and explain perceptual phenomena. But as we have seen, vision presents us with deep computational problems, and nervous systems confront us with stunning complexity. Most of the hypotheses we test and the stories we tell are far too simple minded by comparison, and ultimately they turn out to be wrong. Worse yet, they can be misleading and stifling because they encourage one to look at the data through a narrow lens." (from Olshausen, B. A.'s contribution "20 Years of Learning About Vision: Questions Answered, Questions Unanswered, and Questions Not Yet Asked"). Although the issue of rate vs. temporal coding is no longer the hotly debated topic it once was, what the "neural code" IS remains a partial enigma, while questions about nearly zero-lag synchronization among cortical networks that do not share the same input are as unanswered as when they were first raised some 10+ years ago. Recent work in pain research has highlighted the complexity between and within networks before signals even reach the dorsal horn, let alone the brainstem. The critical roles played by "dendritic computations" have a history of study going back over a century, yet in most ways they are only beginning to be explored (see, e.g., Cuntz, H., Remme, M. W., & Torben-Nielsen, B. (eds.) (2014). The Computing Dendrite: From Structure to Function (Springer Series in Computational Neuroscience Vol. 11). Springer, for a review).
Computational models often make little reference to empirically based foundations beyond Hodgkin & Huxley's (1952) work on the giant squid axon, and they continue in the spirit of (and rely heavily on) computational models like that of McCulloch and Pitts (1943). The unfortunate fact is that a founding principle for much of cognitive science (including the neurosciences) was the irrelevancy of "hardware" and the primacy of the algorithm. Thus computational models continue to be evaluated based on what they can do rather than on their biological plausibility. As a result, most of the developments within the cognitive sciences related to computational modelling have found more utility in machine learning & soft computing than in empirically based theoretical frameworks of neuronal functioning.
This is not to say we have not made tremendous strides. It is simply that one of those strides was the realization that, despite the practically annual predictions since the 1940s that strong AI was "just around the corner", we have learned more about how much farther we are from questions we thought we were close to answering than we have learned answers. A central issue today is the connection between computational models and higher cognitive functioning. Neuroimaging, especially the currently most popular and frequently used method (fMRI), examines neuronal function almost entirely at a level beyond any computational model. It wasn't that long ago that "Jennifer Aniston" neurons were discovered because the computational, reductionist approach begotten in the 50s and 60s was still being used despite its fundamental inadequacy. An additional problem has been the ever-increasing number of fields involved in questions regarding neuronal models and functions. There was a time when computer scientists, psychologists, and linguists made up the bulk of cognitive scientists. Today, one finds computational neuroscience involved in everything from cognitive engineering and human-computer interaction to neurology and psychiatry. While older computational modelling methods, thanks to their generic nature, could (and, usually unfortunately, still can) be applied to neural correlates of cognitive-perceptual processes, lower-level brain function presented few such analogous opportunities. One might well ask why the spinal cord did not receive the attention within the computational neurosciences that cortical networks did. The reasons range from the simple fact that it is much easier to get grant money for answering questions about the neural basis of political orientation, despite a lack of solid theoretical or empirical bases, than for the study of the brainstem, to the fact that the roots of cognitive science (and neuroscience) lie far, far more in research, models, and theories about COGNITION, behavior, and higher-level functions than in what is about as close to the peripheral nervous system as one can get without leaving the brain.
I'm not arguing this is a good thing. Far from it. Too often we seem to have skipped over the answers to fairly fundamental questions while assuming that we could develop cohesive theories of far greater complexity using a combination of ~50-year-old biological neuronal models and theories of the brain that pre-date most neuroimaging (not to mention epigenetics, the systems sciences, and the necessary incorporation of neurologists, other medical doctors, and psychologists within the cognitive and neurosciences). More time should be spent understanding how brain structures like the brainstem work instead of identifying neural correlates of some abstract dimension like political orientation or intelligence that 1) is to an unknown degree an "artificial" construction and 2) lacks any computational models that are biologically plausible (or even nearly so). It's just that the relative scarcity of computational models of the brainstem can be understood in terms of the general lack of computational models that are biologically based; the time it has taken to escape the death-grip of the algorithmic, "hardware doesn't matter", functional-rather-than-empirical evaluation of models inherited from classical cognitive science; and the relative interest in questions about higher-level processes, whether we have the capacity to explain them or not.
This may at first look very much "left field", but this question interests me as well. We have developed a new form of illusory space that is based on perceptual structure rather than on the fundamentals of optics. www.pacentre.org
We term this Vision-Space, as opposed to picture space. We generate this perceptual structure (as a biological system), and it is this that gives rise to the data structures appearing within the phenomenal field (experiential vision). The point is this: the primary data-set of Vision-Space is a field, and it is our ability to generate this that establishes our implicit form of spatial awareness (proximity cues, not depth) apparent as 'peripheral vision'. This data potential has to be enfolded within the light array, and we are looking for the neural correlates of this. My suggestion is that decoherence is occurring at the retina and feeding the dorsal pathway with the implicit spatial data potential. This would require dark light, passive absorption at the retina, correlated firing and discharge to the superior colliculus (retinocollicular), and processes akin to synthesis, as opposed to differentiation, taking place. In the SC, spatial visual data (not a 2D map) would then be combined with spatial sound. The suggestion would be that it is a field potential that is responsible for multi-sense integration and that this takes place in these evolutionarily older areas of the brain.
Vision-Space (we have developed a software tool) models visual awareness, and I think it 'illustrates' aspects of the contribution these areas make to perception. If we know what that contribution looks like and what its attributes are, we should be able to understand how it is generated.
Retinal Receptor Functions
http://youtu.be/XzA7zirZK7s
Retinal processing and the research proposition
http://youtu.be/1ZMUPt6Oz_s
Primary spatial awareness direct from the light array?
http://youtu.be/8wUT0HGNSww
Retinal processing article in the library section of PAC http://www.pacentre.org/era/index.php
@Eric The role of the brainstem in pain perception is complicated - if you need them, I can send you my own figures privately (not published yet). There are important works by Tracey and Dickenson, SnapShot: Pain perception. Cell 2012;148(6):1308; Tracey and Mantyh, The cerebral signature for pain perception and its modulation. Neuron 2007;55(3):377-91; Peyron and Faillenot, Functional brain mapping in pain perception. Med Sci 2011;27(1):82-7 (in French only, I'm afraid); and Chen, Pain perception and its genesis in the human brain. Sheng Li Xue Bao 2008;60(5):677-85.
The newest work, downplaying the role of the cerebellum: Ruscheweyh et al., Altered cerebellar pain perception after cerebellar infarction. Pain 2014 (epub).
@John Very interesting. It seems the associated representations are more important than the things we see. We have previously simulated basic symptoms of Autism Spectrum Disorders (ASD), including spatial attention. I'm interested in whether your models of spatial awareness might be useful in developing our research on ASD.
@Andrew In my opinion there are many reasons to develop computational brain models. You are right that the words "computational neuroscience" may be overused, but we should use this popularity to develop novel, more adequate tools for understanding how the CNS works. I hope the 21st century will be the time of IT-based medicine and artificial intelligence. Lack of grants (= money) is a common problem for all scientists, but the Human Brain Project was launched with a high budget: €1.19 billion, 135 institutions, 7000 scientists. Moreover, computational simulations are fortunately cheaper, and basic preliminary research can be carried out almost without any costs, so in selected cases such simulations may become more common than conventional experimental studies (where those are available at all).

Please take into consideration that we are assessed as data rich but theory poor, data rich but information poor. We have data sets, but sometimes we can understand them thanks to data mining rather than our own minds, and this situation will become more common in the future. The huge amounts of data generated by our novel tools (see CERN's LHC, but also fMRI, BCIs, neuroprostheses, etc.) call for new solutions to provide full understanding. We think the Hodgkin-Huxley model is only a useful approximation, but perhaps more detailed models of more complicated neurons will themselves need further simplification through models before we can understand them? We don't know - maybe some CNS processes will be so detailed and complicated that direct analysis will be far beyond our understanding. Thus computational data analysis and simplification may be necessary. We still don't fully understand EEG despite having used it for a long time, and now we are beginning to use MEG and BCIs. But we need such solutions - they may bring the next breakthrough in the health sciences and neuroprosthetics, and many disabled people are waiting for them.

You are right - our knowledge may be regarded as incomplete, but that is not a reason to do nothing; it should rather stimulate scientists and clinicians to do more research. You are a psychologist and I am an engineer (IT, biocybernetics), and we may look for different solutions to the same problems within our own disciplines. But we should cooperate, since we have the same goal - better understanding. I am deeply convinced the role of engineers in medicine will increase; that is my field of research and scientific cooperation. Neurobiological relevance is key, but using models we can scale both function and structure. On the other hand, computational methods do not solve all problems, or even most of them. Every question needs an appropriate scientific answer, so there is no universal artificial tool - though our brain is such a universal natural tool. This is a very general and incomplete view, and it might take a book to provide better argumentation. It seems it would be better to conduct more research and publish the results :)
Now, let me give you some context: I am focused on building a "new Gate Control System", a novel architecture which reflects the real connections between primary afferent neurons, interneurons in the dorsal horn (excitatory and inhibitory), and descending projections, especially from the RVM. The latter have been the most difficult part for me!
Several weeks ago, I opened a topic related to this issue on this website. As far as I know, the RVM provides serotonergic input to the superficial dorsal horn (SDH), but serotonin (5-HT) was found exclusively in a subset of neutral cells (Potrebic, 1994), whose role in nociceptive modulation remains unclear. On the other hand, GAD67 immunoreactivity was found in most OFF-cells and less frequently in ON-cells, suggesting that the majority of OFF-cells, and some ON-cells, are GABAergic (Clayton, 2006). The question is: of all the SDH neurons, which ones receive RVM descending projections? Islet? Vertical? Transient central?
I have attached a circuit proposed by several authors (Zeilhofer, 2012) so that you can see where I'm pointing... If you can help me with this issue, I'd appreciate it!
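For what it's worth, here is a minimal, purely illustrative rate-based sketch of a gate-control-style dorsal horn circuit with a descending input standing in for the RVM. The cell types, weights, and sigmoid nonlinearity are assumptions for demonstration only, not a claim about which SDH populations (islet, vertical, transient central) actually receive RVM projections.

```python
import numpy as np

def sigmoid(x):
    """Simple rate nonlinearity (an assumption, not a biophysical claim)."""
    return 1.0 / (1.0 + np.exp(-x))

def gate_output(c_fiber, a_beta, rvm_descending,
                w_exc=1.0, w_inh=1.2, w_desc=2.0):
    """
    Toy gate-control-style computation.

    c_fiber        : nociceptive (small-fibre) drive, 0..1
    a_beta         : non-nociceptive (large-fibre) drive, 0..1
    rvm_descending : descending modulatory drive (here assumed to
                     facilitate the inhibitory interneuron), 0..1
    Returns the activity of a hypothetical projection neuron.
    """
    # Inhibitory interneuron: driven by A-beta input and by the descending
    # input, suppressed by C-fibre input (classic gate-control arrangement).
    inhib = sigmoid(a_beta + w_desc * rvm_descending - c_fiber)
    # Projection (transmission) neuron: excited by both afferents,
    # inhibited by the interneuron.
    proj = sigmoid(w_exc * (c_fiber + a_beta) - w_inh * inhib)
    return proj

if __name__ == "__main__":
    # Same noxious input, with and without descending modulation
    print("no descending drive :", round(gate_output(0.9, 0.2, 0.0), 3))
    print("strong RVM drive    :", round(gate_output(0.9, 0.2, 1.0), 3))
```

In an architecture like the one you describe, the interesting design choice is exactly the one you raise: which SDH population the `rvm_descending` term should target, and with what sign; here it is wired as descending inhibition only to keep the sketch concrete.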
@John Very interesting. It seems the associated representations are more important than the things we see. We have previously simulated basic symptoms of Autism Spectrum Disorders (ASD), including spatial attention. I'm interested in whether your models of spatial awareness might be useful in developing our research on ASD.
Not sure if you have seen these?
Health implications of non-perceptually structured content and screen technology http://youtu.be/6gizIuLL9mg
Typical and atypical perceptual structures: Potential links to ASD and stroke related conditions http://youtu.be/Pss3UOoiuyQ
One of the things we need to get beyond is "representations". There is very little of that going on; it is 'presentations'. There aren't any 'lines' or 'edges' etc. to detect from the retina. There isn't a picture on the retina to reproduce. Lines and edges etc. are conceptualisations drawn from all the various cues that build towards such realisations. The 'lines' in a Cézanne painting are on the surface of the work; they are the last consideration, not the first. Vision is essentially 'diagnostic' in nature and 'one way'. If you were to work back from the percept, you would not end up with a 'picture' aligning with the optical projection.
Some deficiencies of pictorial space with respect to the structure of phenomenal field
http://youtu.be/i2a5lVz6DBE
This is highly relevant to ASD, I think. Their systems generate a different reality. It's not a messed-up 'representation'; it's just a different 'presentation' coming from different diagnostics. That different presentation generates a unique reality. I think that ASD-related conditions involve issues with the generation of the implicit spatial field that allows us to effortlessly and subconsciously see ourselves in relation to the world and others: that ability to effortlessly contextualise which comes with our implicit sense of being embedded in the world and connected with others. This 'omission' has to be compensated for through the 'explicit' channels, competing with the tasks they naturally undertake in association with our explicit take on reality. This doubling up of function through just one channel causes overload and an inability to concentrate on the job in hand.
What I think will happen, IF we can get to grips with what a 'typical' perceptual structure generates, through understanding the phenomenal field and what gives rise to it, is that we should be able to second-guess what occurs with an atypical perceptual structure: what is missing, and what has become more/over-developed in order to compensate. There is a chance that Vision-Space could provide immersive therapies? Even trigger pathways that have failed to form?
The problem we have in the Vision-Space team is getting the research cash together to create a reliable VS programming architecture capable of modelling typical perceptual structure to act as the platform or 'base camp'. We need the 'kit' before we can get serious about tackling and evaluating the related issues. The research institutes want the evaluation to justify the cost of funding the kit required to undertake the evaluation!!!
If the two research bids we have pending are accepted, we should be able to provide researchers with the system to undertake the evaluation projects. We are always looking for partners etc. to help bring this about.