Miguel Nicolelis and Ronald Cicurel claim that the brain is relativistic and cannot be simulated by a Turing machine, which runs contrary to well-marketed ideas of simulating/mapping the whole brain: https://www.youtube.com/watch?v=7HYQsJUkyHQ If the brain cannot be simulated on digital computers, what is the solution for understanding the brain's language?
Dear friends, Dorian, Dick, Roman, Mario, et al.,
There is hope (Haykin and Fuster, Proc. IEEE, 102, 608-628, 2014). But first we have to modify our computer and some of our traditional ideas about the brain. With respect to both, here I offer humbly some of my views after half a century of working in cognitive neuroscience. I shall be brief and cautious. For an account of empirical evidence, read my “Cortex and Mind” (Oxford, 2003).
1. Alas, the computer cannot be only digital; it must also be analog. Almost all cognitive operations in the brain are based on analog transactions at many levels (membrane potentials, spike frequencies, firing thresholds, metabolic gradients, dendritic potentials, neurotransmitter concentrations, synaptic weights, etc.). Further, the computer must be able to compute and work with probabilities, because cognition is largely probabilistic in the Bayesian sense, which means that our computer must also have a degree of plasticity.
2. The computer must also have distributed memory. In the brain, especially the cortex, cognitive information is contained in a complex system of distributed, interactive and overlapping neuronal networks, formed by Hebbian rules of association between temporally coincident inputs (i.e., sensory stimuli or inputs from other activated networks). The cognitive “code” is therefore essentially relational or relativistic, and is defined by connective structure, by associations of context and temporal coincidence. That is why, theoretically, connectionism and the connectome make some sense.
3. It is true that the soma of a neuron contains “memory”: in the mitochondria. But that is genetic memory (what I call “phyletic memory,” memory of the species), some of which was acquired in evolution. It is important for brain development and for the function of primary sensory and motor systems. It is also important for regeneration after injury. Further, it is the ground base on which individual cognitive memory is formed. But the latter consists of more or less widely distributed cortical networks or “cognits” (J. Cog. Neurosci. 21, 2047–2072, 2009). These overlap and interact to a large extent, whereby a neuron or group of neurons practically anywhere in the cortex can be part of many networks, and thus of many memories or items of knowledge. This is trouble for the connectome, which, if it ever comes to fruition, will be vastly more complex than the genome.
4. Our present tools to define the structure, let alone the dynamics, of the connectome appear rather inadequate to deal with those facts and hypotheses. Consider DTI (diffusion tensor imaging), one of the tools presently in fashion and widely used to trace neural connections. It is based on the analysis of the directional diffusion of water molecules in a magnetic field. Therefore, it can successfully visualize nerve fibers with high water content, such as myelinated fibers and some large unmyelinated ones. The method (I dub it “water-based paint”) is good for tractography, for visualizing large, fast-conducting fibers, but not for the fine connective stuff that defines memory networks.
5. What’s more, those networks change all the time, even during sleep. In sum, it is difficult to imagine a dynamic connectome that would instantiate the vicissitudes and idiosyncrasies of our thinking, remembering, perceiving, calculating and predictive brain.
Some of this may be wrong. But that's the way I see it, and it may be useful for modeling the real brain. Cheers, Joaquín
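Fuster's point 2, networks formed by Hebbian association between temporally coincident inputs, can be sketched in a few lines of Python. This is a toy illustration only; the network size, learning rate, and input pattern are invented for the example and do not model any specific cortical circuit.

```python
import numpy as np

# Toy sketch of Hebbian association (point 2 above): connections between
# neurons that are repeatedly co-active are strengthened. All numbers here
# are invented for the illustration.

n_neurons = 8
weights = np.zeros((n_neurons, n_neurons))
eta = 0.1  # learning rate (arbitrary)

def hebbian_step(weights, activity, eta):
    """Strengthen w_ij in proportion to the coincident activity of i and j."""
    dw = eta * np.outer(activity, activity)
    np.fill_diagonal(dw, 0.0)  # no self-connections
    return weights + dw

# Repeatedly present one pattern in which neurons 0-3 are co-active.
pattern = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
for _ in range(20):
    weights = hebbian_step(weights, pattern, eta)

# Connections among co-active neurons grow strong; all others stay zero,
# so the "memory" lives in the connective structure, not in any one cell.
print(weights[0, 1], weights[0, 5])
```

The stored association is distributed across the weight matrix, which is the sense in which the code is "relational": no single neuron holds the memory.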
Dear Dorian, our thesis, and what we show in our book, is that you cannot simulate a mammalian brain on a digital machine at a level where you generate the higher-level functions. Obviously, if you neglect some phenomena you can produce a simulation. The distributed coding and the degeneracy principle that Miguel Nicolelis has demonstrated experimentally, as well as the HDACE, are typically analog and not simulable. I suppose that when certain groups claim they are simulating the brain, they are limiting themselves to certain digital functions that may not be totally relevant. We also show that there is nothing like a "brain code"; that is, there is nothing like a language in the sense in which we usually use this word. De facto, information processing in the brain follows totally different rules than the usual Shannon-Turing information. We have baptized this Gödelian information. It has its own rules, which are close to quantum information. The book indicates the route we suggest.
Dorian, I believe that a preliminary computer model that utilizes the "principles" of how brain and body interact, and programs this system with certain limited external sensory information, can be made. The "human"-like response to external inputs can also be done. Such a preliminary system, after development and testing, can be refined for higher processing and intelligence.
Dear Ronald,
An anonymous reviewer on Amazon claims that your book transcribes McFadden's theory. Is that true?
I think the question is misphrased, and founders on the difference between simulation and emulation. Yes, we can simulate the whole brain on a digital computer; it would just take a much larger computer than any currently available. However, emulation of the brain might take a different architecture than current von Neumann machines.
The difference lies in how faithfully the functions of the brain are copied. Emulation holds to a much higher standard than simulation: a simulation can simply echo a portion of the functions of the brain, which is all that current simulations can do, because we don't yet have a large enough computer to do more.
Part of the issue is simply the nature of science and "falsifiability": the way that science tests to make sure that what it assumes is necessary is truly necessary. Scientists run expensive tests to prove the necessity of more complex assumptions.
Right now we are looking at the difference between expensive but necessary tests and well-held assumptions. Many people have assumed that it is impossible to simulate the whole brain. Some researchers have assumed that it is not only possible to simulate the whole brain, but that all they need to do is simulate the connections between the neurons, thus proving that connectionism is necessary and sufficient to simulate the brain. Failing this test would only prove that connectionism is not sufficient to simulate the whole brain.
In other words, failing this test would disprove connectionism. I believe the choice not to pursue the test merely indicates a consensus in the neuroscience community that connectionism fails to meet the needs of full brain emulation. The test is deemed unnecessarily expensive and less than informative. This is not the last word on simulating the brain; it is merely a test of connectionism. Since it is probably the consensus of the neuroscience community that connectionism is a null issue and never could have simulated the brain, the test is redundant.
Theophanes: "Interesting, but how does one measure the 'awareness' of the emulation?"
You raise interesting questions, Theo, but they are outside the question of whether we can simulate the whole brain on a digital computer. Whether we can emulate the whole brain on a digital computer is another question entirely, and how to measure awareness is a whole other can of worms. I will leave unquestioned the ethics of creating a disembodied consciousness, because we are far from creating such a thing.
Dear Graeme ,
"Expensive but necessary tests.... Failing this test will only prove that connectionism is not sufficient to simulate the whole brain."
The book already states that simulation on digital computers cannot generate the higher-level functions. Why do you need to perform "expensive tests" when we already know the outcome?
Well, apparently, many scientists are optimistic: one of the pillars of the European Union's brain initiative is to build a small-scale simulator and work toward that:
https://www.humanbrainproject.eu/brain-simulation-platform1
Since then we have gained a significant amount of information on what the brain does, so perhaps not everything was wasted, Roman.
I think it is pointless to complain about there being little theory when you throw out every attempt to make a theory that doesn't match your particular belief system. What you describe as hype are attractive points where models have been created that are not complete, or not completely accurate. It takes a number of models to suggest a theory, but by throwing the baby out with the bathwater you ensure that no models get adopted, and therefore a theory can't emerge. In your mindset everyone must be a high-level mathematician or they are wrong. Hype happens because there are initial indications of correct, or at least approximately correct, activity. It gives those of us who are not high-level mathematicians hope that we might understand the brain, even a little bit, without the soaring complexities of dynamic continuity. Sure, our attempts are only approximations, but in many cases they are good approximations. Connectionism is wrong not because of the hype, but because it attempts to capture human brain function with a primitive neuron theory based on early neural development. There is quite simply a lot of difference between the giant squid's neurons and complex mammalian cells. To throw out connectionism is to assume that it contains no information at all, when what is needed is mostly an upgrade in neural complexity.
Dear Graeme et al,
" Connectionism is wrong, not because of the Hype........."
Hasan indicates that "a small scale simulator" in the EU may repair the issues generated by previous attempts. Is that right?
One may accept that simulating/mapping the whole brain on a digital computer is a misleading path, a result of miscommunication between scientists with very different backgrounds. Since our brain includes different (non-Turing) forms of computation, simulation/emulation on a digital system is limited.
What would be the solution?
Dear Dorian,
We do cite Johnjoe McFadden's theory in our book, as well as other approaches using EM brain fields. The relativistic brain is originally based on experimental data and complemented by a mathematical approach. I suppose your anonymous reviewer did not really read our book. Hope this helps, best, Ronald
I think that if you spend all your effort getting mathematicians and physicists involved, you will end up waiting centuries to get your theories for your theoretical applications, and you will end up getting sidetracked into things like consciousness as a theoretical constant. At least with computer scientists you can test as soon as the pieces come into play, and throw the theories out once they are proven not necessary or sufficient. What we have learned is that connectionism isn't sufficient. Perhaps computation isn't sufficient either, and perhaps we will simply get nowhere with dynamics because there aren't enough mathematicians and physicists to do the grunt work.
Dear Ronald,
a. "I suppose your anonymous reader did not really read our book. "
The actual crisis of the Human Brain Project (HBP) appears to be generated by a lack of funding for cognitive projects in the EU rather than by a scientific dismissal of the project. You speak about the danger of becoming a robot. Do you feel that everyone involved in the HBP or similar projects can understand your book?
b. The second reviewer read the book but is unhappy, since he/she "didn't find a concise argument for why Turing machines wouldn't be equivalent to human brains." Can you provide a concise argument?
Colin "Look elsewhere, beginning with a definitive categorical analysis of assumptions for the past 60 years."
I strongly feel that this should be the starting point, and I'm open to any reliable proposal to reshape our projects.
Dear Dorian,
a. I do not know many of the people involved with the Human Brain Project; I participated 10 years ago in the launch of the Blue Brain with Henry Markram, so I am not able to answer your first question. You are correct: the dismissal is rather due to managerial and political reasons; I did not see any scientific objections. Most neuroscientists believe in computationalism.
b. I thought that the book was concise enough! Plus, it suggests many new research directions. But if you want a single argument, I would summarize it by saying that for hierarchical complex adaptive systems like the brain, approximation is not good enough: digital procedures will diverge immediately.
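Ronald's divergence claim can be illustrated with a toy numerical experiment, under the assumption that the system of interest is chaotic. The logistic map below is used purely as a stand-in chaotic system, not as a brain model: the same deterministic iteration carried out at two digital precisions agrees at first and then parts ways completely.

```python
import numpy as np

# Toy illustration of "approximation is not good enough" for a chaotic
# system: identical deterministic dynamics, iterated in single vs. double
# precision, diverge after a few dozen steps because rounding errors are
# amplified at every iteration.

def logistic_trajectory(x0, steps, dtype):
    x = dtype(x0)
    r = dtype(4.0)  # fully chaotic regime of the logistic map
    out = []
    for _ in range(steps):
        x = r * x * (dtype(1.0) - x)
        out.append(float(x))
    return out

lo = logistic_trajectory(0.2, 60, np.float32)  # ~7 decimal digits
hi = logistic_trajectory(0.2, 60, np.float64)  # ~16 decimal digits

# The tiny rounding difference at the start grows exponentially, so the
# late parts of the two trajectories are unrelated to each other.
print(abs(lo[0] - hi[0]))    # tiny
print(abs(lo[59] - hi[59]))  # typically order one
```

No fixed digital precision tracks the exact trajectory indefinitely; whether this toy fact carries over to brains at the level Ronald claims is, of course, the point under debate.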
Dear Ronald,
“Most neuroscientists believe in computationalism…. Approximation is not good enough, digital procedures will diverge immediately.”
Even though this is a refined view from mathematics, it should be well received in computational science.
"I participated 10 years ago to the launch of the Blue Brain with Henry Markram"
1. Do you feel that this explanation can be understood in neurobiology, cognitive science, psychology...?
While many scientists will agree that digital procedures diverge, they may be tempted to try a mixed digital/analog implementation (e.g., neuromorphic chips) as a direct step toward the "singularity".
2. Is this new/old trend of neuromorphic approaches a reliable solution for brain simulation and the "singularity"?
In computational science the explanation would be simple. In addition to temporal patterns (firing rate, ISI, ...), we identified electric charges as the main carrier of Shannon/non-Shannon, semantic information in the brain, and the simulation of many electric charges would slow down any digital computer. The hypothesis that electric charges carry information is reinforced by experimental data: action potentials are non-stereotyped events. Probably the entire computational paradigm needs to be changed; however, I feel that you are right that in this case "the dismissal is rather due to managerial and political reasons".
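The cost argument above can be made concrete with simple combinatorics: a direct simulation of N mutually interacting charges must evaluate every pair at every time step, i.e. O(N^2) work. The counts below are purely combinatorial; no physical constants or brain-specific numbers are assumed.

```python
# Back-of-the-envelope sketch of why charge-level simulation is costly:
# every distinct pair of charges contributes one interaction term per step.

def pairwise_interactions(n):
    """Number of distinct charge pairs evaluated per time step."""
    return n * (n - 1) // 2

for n in [10, 100, 1_000, 10_000]:
    print(n, pairwise_interactions(n))
# Each 10x increase in the number of charges costs ~100x more work per step.
```

Fast approximate methods (tree codes, multipole expansions) reduce this scaling, but only by accepting approximation, which is exactly what the divergence argument above questions.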
I think that is a good example of the limits of computational neuroscience, where a neuron is reduced to a radix-nine network. We limit our models because we can't do the math with larger models; the computational load is too great.
Colin, Graeme et al,
Regarding professional modelers versus computationalists, there should be no conflict. They need to collaborate, since "we can't do the math with larger models" and there are also limits to simulating these models on either analog or digital computers.
In fact, lately professional modelers and computationalists have been forced to faithfully replicate temporal patterns (firing rate, ISI, STDP) and "connectivity". This misleading view is included in our textbooks, and it is at the origin of the EU project.
However, initially neither computationalists nor professional modelers hypothesized that modelling temporal patterns and "connectivity" would replicate the human brain.
1. Where did this naive view come from?
2. If the computational load is too great, what is the solution?
Actually, the idea of fully emulating the brain came from the military, as a way of proving that the models were accurate. The idea was that if they got the timing right, the model was deemed correct, even if they were playing fast and loose with everything else. For this reason neurons were slowed down by wait loops in order to meet the standard timing, which is one reason why the computational models are so limited.
It was simply applied too early in the modelling cycle.
In other words, the simplicity of the model is forced by the quantity of calculation needed: by greatly simplifying the gate you reduce the cost per calculation and therefore increase the number of neurons you can model, at the expense of any attempt to actually emulate the brain.
Dear Graeme ,
" therefore the number of neurons you can model, at the expense of any attempt to actually emulate the brain."
Therefore, this approach from the HBP and other similar initiatives may not be the right solution for emulating the human brain.
One of the problems I see with this design is that the neurons of such basic models tend to be limited to a fan-out of 9 dendrites. In essence, as has been suggested, this model does not even capture the complexity of one dendrite; you have to assume that the neural function is nested quite deep to capture a complete neuron.
Thus neural functions such as this bear no numerical relationship to the actual number of neurons in a whole-brain simulation. This suggests that estimates of how many neurons a model has are of limited utility in deciding how many neurons an emulation should have.
Graeme: " ....the idea of fully emulating the brain, came from the military...."
Do you feel that Markram and so many others borrowed this naive idea, presented it as their own view/project in the EU, and got it immediately approved?
Then Miguel Nicolelis, Ronald Cicurel and others would have to put up a heavy fight to reshape this naive view.
Simulating a nerve cell involves more than just simulating one part of the input/output system of the living biological nerve cell. I find the idea interesting, but I really don't believe a simulated brain will work like a biological one. The brain is more complex than just the electrical interactions and spikes or load potentials. There is also a chemical system involved, which makes it much more complex. Maybe it is not necessary, but it is there for some purpose.
Looking at the evolutionary process: for all our body parts there existed something in our environment that produced a specialized organ to help the species survive better. There is electromagnetism, which leads to an eye (to see). There are acoustic waves, which lead to ears (to hear); there are chemicals transported by air, which lead to noses (enabling smell). There are friction and gravity, which enable us to walk, grab, creep, etc. I am wondering, and have not seen a clear answer to this: what force or phenomenon "identified by evolution" caused the brain or nerve cells? If this could be identified, I am convinced that we could simulate the brain.
Roman,
thank you for that information. I will definitely buy that book.
;-)
Wolfram: "The brain is more complex than just looking at the electrical interaction and spikes or load potentials"
I completely agree, and the goal of the HBP was interdisciplinary:
(i) Neuroscience: understand the brain language
(ii) Computing: develop supercomputing technologies
(iii) Medicine: develop therapies
However, if (i) is not reliably fulfilled, then the chance of solving (ii) or (iii) is greatly diminished. What would be your approach?
Note: Brain language is just a metaphor.
This is an interesting philosophical discussion with many different aspects. If it is all about "deciphering the brain language" (also a metaphor), we would be able to describe and prove ourselves in a somewhat formal way. Is that possible? I don't know, but then we would invalidate Gödel's proof that no formal system can prove itself. I believe that Gödel's proof also holds for biological systems (again, another discussion).
In my opinion, it should be impossible, as the brain doesn't work like a digital computer. Many brain functions are tuned by "wet" mechanisms, and driven by the brain's integration into the specific body it grew up with. Another difference: it seems that the brain does not distinguish between hardware and software. Wolf Singer, formerly of the MPI for Brain Research, Frankfurt, Germany, suggests, in my understanding, that we might need the help of n-dimensional mathematics to advance.
Yes, Colin, but why is it so important to have an exact bivalent solution? In fuzzy logic we can work with non-exact populations of solutions.
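The fuzzy-logic alternative to a bivalent answer can be sketched in a few lines: each statement has a degree of truth in [0, 1] instead of a true/false value, and the connectives act on those degrees (the standard min/max semantics). The membership degrees below are invented for the illustration.

```python
# Minimal fuzzy-logic sketch of "non-exact populations of solutions".

def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

high_rate = 0.7    # degree to which "the firing rate is high" (invented)
novel_input = 0.4  # degree to which "the input is novel" (invented)

print(fuzzy_and(high_rate, novel_input))  # 0.4
print(fuzzy_or(high_rate, novel_input))   # 0.7
print(round(fuzzy_not(high_rate), 2))     # 0.3
```

Nothing here forces a crisp decision; the degrees themselves can be carried through an entire inference chain, which is the sense in which an exact bivalent solution is not required.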
Theophanes: "...you would have first to find out how to build everything from the Planck scale upwards" - hard to simulate on digital computers
Dirk: "Lots of brain functions are tuned by wet mechanisms and driven by its integration in the specific body it grew up with" - hard to simulate on digital computers
Both are excellent observations.
In fact, only three years after the HBP started we found that digital simulations cannot fully replicate the whole brain. In addition, the proposal to downgrade the HBP kills the entire project and is not a solution for (i), and implicitly not for (ii) and (iii). Having a reliable model of the brain that can be thoroughly studied is highly important.
a. Is the brain-computer interface (BCI) technique the only solution to achieve this goal?
b. Can we have a better solution?
Note: Clearly, math, physics and information technology have to be included in any attempt to build a brain model.
The brain is not a computer, nor does it work like one:
1. The information in the brain is content-addressable, the computer's is not (the Internet's is, however).
2. The brain processes information largely in parallel, the computer does not.
3. The cognitive (cortical) code is essentially a relational code. Information is defined by connections between different groups of neurons, which vary greatly in the time domain. No computer can handle that. No "connectome" can simulate the spatio-temporal dynamics of that distributed system in an environment of constantly changing synaptic weights and chemical transactions.
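Point 1, content-addressable memory, can be made concrete with a minimal Hopfield-style network, a standard textbook construction used here only as an illustration (the two +/-1 patterns are invented): the network retrieves a stored pattern from a corrupted fragment of its own content, with no address involved.

```python
import numpy as np

# Minimal sketch of content-addressable memory: retrieval by content,
# not by address, in a Hopfield-style attractor network.

def train(patterns):
    """Hebbian outer-product rule over +/-1 patterns."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w / n

def recall(w, probe, steps=10):
    """Iterate the network from a probe state until it settles."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)  # synchronous threshold update
    return s

stored = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                   [1, -1, 1, -1, 1, -1, 1, -1]])
w = train(stored)

# Corrupt two bits of the first pattern, then retrieve it by content.
probe = stored[0].copy()
probe[0] *= -1
probe[3] *= -1
print(recall(w, probe))  # recovers the first stored pattern
```

Whether such attractor dynamics scale to the time-varying, chemically modulated connectivity Fuster describes is exactly what his point 3 doubts; the sketch only shows that content addressing itself is computationally unmysterious.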
Why do we always insist on cortical codes? A significant amount of the brain is involved in thinking, and the cortex is just the surface tissue of that function. While much of what Joaquín says is true, the HBP did not limit itself to a serial von Neumann architecture, so some of his objections do not make sense (#2, for instance). However, the minimal modelling involved in the neural function makes a laugh out of estimates of how many cells would be needed for a full brain emulation.
Graeme : "The effect was that some of his objections do not make sense( 2 for instance)."
Even in the case of 2, it may not be so simple, since parallel computing does not necessarily have the properties of biological "parallelism", which starts "from the Planck scale upwards".
Dear Graeme and Dorian:
I am willing to update my apparently outdated views about the functional architecture of computers and admit that, indeed, some modern computers process information in parallel. Further, the only biological parallelism I can invoke with confidence is that of some sensory systems, like vision and touch. Maybe, to answer your objection, I should at least qualify point #2 with constraints. But to address other brain/computer differences I should add:
4. Much of the brain, especially at its higher levels (e.g., associative cortex), works in a probabilistic manner. Whatever computation takes place there (notably in perception) is to a large extent Bayesian, based on prior history (i.e., memory) and hypotheses. Modern computers, I venture, have a hard time computing probabilities, especially when they are predicated on many prior probabilities.
5. The prefrontal cortex makes the brain a predictive and pre-adaptive system. Both qualities are hard to implement in a computer, again because they are "multifactorial" and based on history. For the computer, the future is at best elusive.
As far as I know, thinking and reasoning (both induction and deduction) are functions of the cerebral cortex. That does mean they are inaccessible to influences from other brain structures. On the contrary, limbic and subcortical structures very much modulate, if not determine, the way we think or reason. Indeed, the cerebral cortex is surface tissue--of the brain of course--but not of thinking, which goes on within it....I think. Cheers, Joaquín
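The Bayesian updating in Fuster's point 4 can be sketched in a few lines of Python: perception as revising a prior belief with each noisy observation, the posterior becoming the prior for the next step. The two hypotheses, the likelihoods, and the observation stream are all invented for the illustration.

```python
# Toy sketch of perception as Bayesian updating over prior history.

def bayes_update(prior, likelihoods, observation):
    """Posterior over hypotheses after one observation (Bayes' rule)."""
    unnormalized = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Assumed likelihoods of low-level cues under two perceptual hypotheses.
likelihoods = {
    "face": {"edge": 0.8, "blob": 0.2},
    "vase": {"edge": 0.3, "blob": 0.7},
}

belief = {"face": 0.5, "vase": 0.5}  # flat prior before any input
for obs in ["edge", "edge", "blob"]:
    belief = bayes_update(belief, likelihoods, obs)
    print(obs, belief)
# After two "edge" observations and one "blob", "face" still dominates.
```

The "prior history" in Fuster's sense is the chain of posteriors itself: each belief is conditioned on everything that came before, which is precisely what makes such systems history-dependent.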
It may be "not so simple," as you suggest, Dorian, but that depends on agreed models. Some of the parallelism at the Planck scale is damped out by the "wet, warm, squishy" structure of the tissues involved; after all, a neuron is just a soft, squishy bag of chemicals. We can safely ignore anything that does not impact the dynamics, and we may even be able to ignore some things that only minorly impact the dynamics; what we can safely ignore is as yet an open issue. Besides, what we will be able to compute on some future computer, such as a quantum computer or an advanced analog computer, is yet to be determined. Digital computers are just the current choice of technology; they allow an advanced approximation if their model is accurate enough. What seems obvious is that the computing industry has let its models drift too far from reality, and that there is a real movement against the inaccuracies that have been allowed.
Dear Graeme and Dorian:
Serious grammatical error in my last message to you! "This does mean..." should read "This does not mean..." I suppose you detected it by reading the rest of the paragraph. Joaquin
Another important direction touched on in this discussion: "connectome vs. dynome". I just wonder why we do not acknowledge the interdependence of these two rather than viewing them antagonistically. I mean, from the perspective of activity-dependent neurogenesis and other properties of neurons, how could we simulate such neuronal characteristics without considering connectomic data?
Dirk: "...we might need the help of n-dimensional maths to advance"
Theophanes: "If the brain does not compute at all how is it...."
At this point in time I feel that the conflict between math and computer science is highly artificial, and the separation between similar fields does not appear to be constructive. In fact, many problems are generated by miscommunication (e.g., non-Turing is not equivalent to non-computation).
Technologically, this "wet, warm, squishy structure" outperforms anything that we know, and a practical solution for building a reliable human brain model, as desired in the HBP, is missing.
What would be a practical approach to fulfill the grand vision of Human Brain Project?
Maybe I have missed something, but haven't we just discovered that neurons also memorize in the soma? Have we already completely understood one cortical column's architecture? The last time I read neuroscience (i.e., yesterday), we didn't know how the dynamic networks codify concepts, nor how they get into executive systems to be combined to find adaptive behaviors. Even more so if we try to talk about consciousness.
In that sense, I think it will be much more difficult to model and simulate the biased behaviors, those that can be nonadaptive and that mainly define us as humans (i.e., emotions), and how they define our "subjective reality": that for which we take decisions and that which we try to adapt to. All Bayesian models of reality are always weighted through emotional filters.
Dear Mario,
Would you like to explain
(1) how "neurons memorize in their soma", and
(2) how the column architecture was completely understood?
Hi Dorian,
I recently read an article about how neurons store information not only by modifying synaptic connectivity and its strength, but also by modifying their genetic or epigenetic expression (I don't remember well, as I read it lightly and it is not my field of expertise). I will try to find the reference and send it to you.
Regarding your point 2, and as you surely know, we are still far from understanding the physical architecture of a single cortical column, or its functional architecture. In that sense, and given that we are still discovering important mechanisms in even the most basic elements, like neurons, I think we are very far from being able to model a full brain. Anyway, I think we should try, because along the way we will find things that can bring important advances to many other fields. But I'm sure we (you and me) are not going to see a full digital simulation of a brain, or a quantum or an analog one. But that is just an opinion.
Dear Theophanes,
"What if all these noisy square pulse like switching ion gating activity over the whole membrane....."
They may be "noisy" only by our standards, since in general whatever does not make sense to us is considered noise.
Hi Dorian,
Re Mario's point that neurons memorize in their soma: one possibility is that the cytoplasmic constitution of neurons, or at least of certain types of neurons, includes molecular components capable of holographic representation of post-synaptic input. For a paper expounding this theory, there's this, by Ursula Ehrfeld:
https://www.researchgate.net/publication/268035659_An_Attempt_to_Describe_Memory_as_a_Hologram_of_Brain_Waves_and_Oscillations_Essay
and this, which is an introductory to the Essay:
https://www.researchgate.net/publication/262914770_From_Physics_to_Neuroscience_-_An_Analogy_between_Optical_Holography_and_Memory_-_An_Attempt_to_Reanimate_a_Fascinating_Idea
I do not think we can fully simulate brain activity. The main issue is that it is not just the brain: all the body's cells take part in brain activity. The brain is under the influence of the body via the autonomic nervous system, the "unconscious" part of the mind. It is also under the influence of visceral sensations from the gut, heart and lungs. Please see our recent publications.
https://www.researchgate.net/publication/273001608_Functional_representation_of_vision_within_the_mind_A_visual_consciousness_model_based_in_3D_default_space
https://www.researchgate.net/publication/275641578_Layers_of_human_brain_activity_a_functional_model_based_on_the_default_mode_network_and_slow_oscillations
We can, however, make an initial template: an oscillation-based model addressing vision and audition that can duplicate the mechanism to a certain extent. It can then be extended with further functions.
Roman, certainly you are entitled to your opinion, but my concern is that before we try to simulate the brain we need to be able to simulate a single human cell. Without that, it is like planning to build a manned rocket for landing on Mars without first creating a jet engine.
Richard: "molecular components are capable of holographic representation..."
Indeed, molecular machinery within neurons and synapses could be at the origin of a holographic, distributed representation of meaningful information; the entire hypothesis makes sense.
Theophanes: How one goes about to get a detailed, say "memory map" of their overall activity with today's technology?
That's a very good question: who has the technology to reliably map the whole brain from the molecular level?
Ravinder: 1. all the body cells take part in brain activity.... 2. we need to be able to simulate a single human cell
I understand the issue; however, I'm not sure that (1) is entirely true. The second task is highly difficult: once the cell is removed from its environment for study, it does not keep its function. One needs an "intact" functional brain to understand the role of a single neuron.
Hi Dorian
I think you have just pointed out the keystone of the problem. As somebody commented before, and as happens in a neural network, information becomes "integrated" into a global system while, at the same time, that information also regulates the evolution of the system itself. We don't know where in a neural network the information is stored, but we can know the function it applies to the afferent signals by exploring the outputs. As every brain is developed through a unique set of stimuli, which in turn conditions the evolution of the system, the only way to simulate a whole brain, even if we had the right model, would be to make it evolve in a controlled environment, taking note of every stimulus it receives over time. Once again, I think it is beyond our current and our mid- to long-term capacities.
In that sense, and continuing my reasoning, we can suppose that the brain is a non-linear dynamic system, which, even if deterministic, the only way to know if it displays a given behavior is to run the complete simulation, what takes us again to the square one.
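Mario's point can be illustrated with even the simplest deterministic non-linear system. The following sketch uses the logistic map, purely as an illustration and not as a brain model: whether a given parameter yields a fixed point, a cycle or chaos is, in practice, discovered only by running the iteration itself.

```python
# Illustrative only: the logistic map, a minimal deterministic non-linear
# system. Whether a given r yields a fixed point, a cycle, or chaos is in
# practice discovered by running the iteration itself.
def logistic_trajectory(r, x0=0.2, n=1000, keep=8):
    x = x0
    for _ in range(n):           # discard the transient
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):        # record the long-term behavior
        x = r * x * (1.0 - x)
        tail.append(round(x, 4))
    return tail

# r = 2.8 settles to a fixed point; r = 3.2 alternates in a 2-cycle;
# r = 3.9 shows no repeating pattern (chaos).
print(logistic_trajectory(2.8))  # all values ~0.6429
print(logistic_trajectory(3.2))  # alternates between two values
print(logistic_trajectory(3.9))
```

Even with the complete equation in hand, the long-term behavior had to be computed by simulation, which is exactly the "square one" problem, magnified enormously in a brain.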
Dorian, from our last communication:
Ravinder: 1. all the body cells take part in brain activity.... 2. we need to be able to simulate a single human cell
I understand the issue; however, I'm not sure (1) is entirely true. The second task is highly difficult: once a cell is removed from its environment for study, it does not retain its function. One needs an "intact", functional brain to understand the role of a single neuron.
You are correct regarding the intact functional brain. I mentioned simulating a single human cell to make the point that it is difficult even to simulate a single cell on a computer; the whole brain would be even more challenging. In many of my articles I have shown that homeodynamic changes across the entire body underlie how we fall asleep and lose consciousness, and then wake up to regain it.
Dear friends, Dorian, Dick, Roman, Mario, et al.,
There is hope (Haykin and Fuster, Proc. IEEE, 102, 608-628, 2014). But first we have to modify our computer and some of our traditional ideas about the brain. With respect to both, here I offer humbly some of my views after half a century of working in cognitive neuroscience. I shall be brief and cautious. For an account of empirical evidence, read my “Cortex and Mind” (Oxford, 2003).
1. Alas, the computer cannot be only digital; it must also be analog. Almost all cognitive operations in the brain are based on analog transactions at many levels (membrane potentials, spike frequencies, firing thresholds, metabolic gradients, dendritic potentials, neurotransmitter concentrations, synaptic weights, etc.). Further, the computer must be able to compute and work with probabilities, because cognition is largely probabilistic in the Bayesian sense, which means that our computer must also have a degree of plasticity.
2. The computer must also have distributed memory. In the brain, especially the cortex, cognitive information is contained in a complex system of distributed, interactive and overlapping neuronal networks, formed by Hebbian rules through association between temporally coincident inputs (i.e., sensory stimuli or inputs from other activated networks). The cognitive "code" is therefore essentially relational or relativistic, and is defined by connective structure, by associations of context and temporal coincidence. That is why, theoretically, connectionism and the connectome make some sense.
3. It is true that the soma of a neuron contains "memory": in the mitochondria. But that is genetic memory (what I call "phyletic memory," memory of the species), some of which was acquired in evolution. It is important for brain development and for the function of primary sensory and motor systems. It is also important for regeneration after injury. Further, it is the ground base on which individual cognitive memory will be formed. But the latter consists of more or less widely distributed cortical networks or "cognits" (J. Cog. Neurosci. 21, 2047-2072, 2009). These overlap and interact to a large extent, whereby a neuron or group of neurons practically anywhere in the cortex can be part of many networks, and thus of many memories or items of knowledge. This is trouble for the connectome which, if it ever comes to fruition, will be vastly more complex than the genome.
4. Our present tools to define the structure, let alone the dynamics, of the connectome appear rather inadequate to deal with those facts and hypotheses. Consider DTI (diffusion tensor imaging), one of the tools presently in fashion and widely used to trace neural connections. It is based on analyzing the diffusion orientation of water molecules in a magnetic field. Therefore, it can successfully visualize nerve fibers with high water content, such as myelinated fibers and some large unmyelinated ones. The method (I dub it "water-based paint") is good for tractography, for visualizing large, fast-conducting fibers, but not for the fine connective stuff that defines memory networks.
5. What’s more, those networks change all the time, even during sleep. In sum, it is difficult to imagine a dynamic connectome that would instantiate the vicissitudes and idiosyncrasies of our thinking, remembering, perceiving, calculating and predictive brain.
Some of this may be wrong. But that’s the way I see it, and may be useful to model the real brain. Cheers, Joaquín
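The Hebbian rule Joaquin invokes in point 2 has a standard one-line form: the weight between two units grows in proportion to their coincident activity (delta_w = eta * x_i * x_j). A minimal sketch, with an arbitrary learning rate and toy patterns chosen only for illustration:

```python
# Minimal Hebbian learning sketch: weights between co-active units grow.
# The learning rate and patterns are arbitrary illustrative values.
def hebbian_train(patterns, n_units, eta=0.1):
    # w[i][j] is the association weight between unit i and unit j
    w = [[0.0] * n_units for _ in range(n_units)]
    for x in patterns:
        for i in range(n_units):
            for j in range(n_units):
                if i != j:                       # no self-connections
                    w[i][j] += eta * x[i] * x[j]  # delta_w = eta * x_i * x_j
    return w

# Two units repeatedly co-active (temporal coincidence) become associated.
patterns = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
w = hebbian_train(patterns, n_units=3)
print(w[0][1])  # 0.2: units 0 and 1 were co-active twice
print(w[0][2])  # 0.0: never co-active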
Dear Roman,
Carol was a smart and caring woman. I worked with her on several doctoral committees and she helped me many times with our statistics.
Now I'll try to answer your questions best I can.
1. Both the genome and cognition (notably memory, knowledge and language) are "codified" (pardon the word) by relationships between finite and well-defined elements: genes and gene components in the first, neurons and neuron assemblies in the second. Just from the elementary structural differences between the two, therefore, you can already deduce a greater complexity of a "cognitive connectome" than that of the genome. At the most elementary level, the genome has some 3 billion chemical units in its instruction set, the cortex some 20 billion neurons. At that elementary level, the potential combinatorial power of the cortex to carry and compute information is much greater than that of the genome. Both have constraints, however, the first more than the second. In the genome, the units are grouped and ordered already at birth, and the phenotype, despite epigenetic changes, is largely the expression of that primordial order. In cognition, the primordial order is also genetically pre-established in the cytoarchitecture of the cortex of the neonate, but the epigenetic potential for expansion and recombination is enormous, and is put to good use by culture, learning and education, thanks, of course, to cortical plasticity. At the dynamic level, I need not ask you to compare the limited versatility and slowness of gene expression with the infinite creativity and alacrity of language expression, for example.
2. The mind is the cortex at work, and consciousness, by definition, the phenomenon of it. At least in my mind!
3. At first blush, my answer to both your questions is no. However, I am willing to concede a possible qualified "yes". Here is why: given that memory and knowledge are hierarchically organized, and given that high categories of either are represented in networks of higher association cortex (perceptual mainly in PTO, executive mainly in prefrontal), it is possible that refined imaging of whatever kind will some day reveal their functional connectivity. But I don't think that such knowledge would lead us much further than neuropsychology has. I'll tell you, I would rather understand the principles of brain function, in general and for the whole brain, than any wiring diagram or a new phrenology inferred from it.
Kind regards, Joaquin
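Joaquin's comparison in point 1 can be made concrete with back-of-the-envelope arithmetic. The figures (3 billion chemical units, 20 billion cortical neurons) come from the text above; treating the genome as a string over 4 bases and a cortical "state" as a binary on/off assignment over neurons is a gross illustrative simplification, not a biological claim:

```python
import math

# Back-of-the-envelope comparison (illustrative simplification):
# genome as a 4-letter string, cortical state as a binary on/off pattern.
genome_units = 3e9        # chemical units (bases), figure from the text
cortical_neurons = 2e10   # cortical neurons, figure from the text

# Compare log10 of the raw state-space sizes: 4**(3e9) vs 2**(2e10).
log10_genome = genome_units * math.log10(4)
log10_cortex = cortical_neurons * math.log10(2)

print(f"log10(genome sequences)     ~ {log10_genome:.2e}")
print(f"log10(cortical on/off states) ~ {log10_cortex:.2e}")
print(log10_cortex > log10_genome)  # True
```

Even under this crude counting the cortical state space dwarfs the genomic one, and the real gap is larger still once graded (analog) activity and plastic connectivity are counted rather than binary states.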
While there may well be a link between advanced consciousness and cortical activity, I am looking at a different source, one that reaches back before the advent of the cortex to invertebrate predecessors which had tentacle-eye coordination or foot-shell coordination long before vertebrates. This capability, which resides in the optic tectum, and thus in the roof of the brainstem, offers a role in linking cause-effect mapping to body sense; in the frog it allows tongue ballistics to be integrated into the center. The layers of sophistication that exist on top of this early source of consciousness would gradually integrate the whole brain, including the basal ganglia and cerebellum, into levels of consciousness that would not have existed without the original core consciousness suggested in this theory.
A whole brain that is ‘fully’ simulated by definition must be its own consciousness or self-aware entity; it has then broken away from the programmer and is no longer a simulation. The strict answer to the current question might thus be ‘No.’
Digital calculation would mean that this presumed consciousness must at some point turn digitally on from being off, and must then advance or regress in discontinuous or discrete digital stages.
Neurology appears to suggest a different architecture. Two action alternatives are weighed in the lateral frontopolar; values are assigned to entities, with the help of the amygdala, in the medial frontopolar. Working memory is in the dorsolateral prefrontal. Positive and negative outcomes are tracked in the anterior cingulate. The inferior frontal junction on the left steps in with an intention to speak when rules are ambiguous. The inferior frontal junction on the right notices emotional ambiguity and blocks inappropriate automatic responses (with the help of the pre-SMA). The superior parietal on the left works with familiar tools. The superior parietal on the right prepares a choice map that is subject to interruption by the non-cognitive temporoparietal junction on the right and made available as a visuospatial entity to non-cognitive dorsolateral working memory and thus the lateral frontopolar on the right.
The basal ganglia connect these cognitive cortical regions when it comes to action. The hippocampus connects these cognitive cortical regions in regard to monitoring. These two supporting systems interact and operate in parallel, both within and across the hemispheres. Humans emphasize neurogenesis in the dentate gyrus of the hippocampus; there is reported to be an almost complete turnover of neurons in humans in comparison to about a 10% turnover for mice (Ernst, PLOS ONE, 2015). It appears that neurogenesis in the alternate subventricular zone is also a “productive lifelong process”; it is reported to divert in humans away from the olfactory bulb and into the striatal nucleus accumbens (Kempermann, Cell, 2014). The dentate gyrus of the hippocampus and the nucleus accumbens or ventral striatum are both inputs to their respective regions; neurogenesis in humans thus appears to enable coupled self-organization of learned data across both striatal action and hippocampal monitoring as together they coordinate the cognitive cortical regions.
Frequencies resonate under the control of the thalamus. Neuromodulators in the reticular activating system adjust the volume levels of cognitive strategies. The result is consciousness that grows in analog, and not discrete digital steps, moving back softly into sleep as necessary. This subtle self-programming and self-aware parallel processing cognition with its chemical-electrical interface probably could not be duplicated in a dry non-chemical digital architecture limited to discrete on-off discontinuities.
A final problem is the social nature of the human mind. Human beings in isolation do not survive. Full simulation of a whole brain might thus require a multiplicity of equivalent interacting mechanisms; the question now would be the characteristics of the required ‘nurture’ (programmatical parenthood and environmental input) of this delicate architectural ‘nature’ which might ensure that the ‘simulation’ does not slip out of human control. This is a social challenge far removed from anything digital.
In my mind there are two sides to consciousness: the awake-aware side and the phenomenally conscious side. Language is a cultural invention. Phenomenally conscious behaviour is invented in the cortex in the form of access consciousness. The combination of the two makes up advanced consciousness. Advanced consciousness is a relative newcomer in evolution, but awake-aware consciousness has been part of the picture since some invertebrates invented it in an evolutionarily remote past. Any antecedents of that remote inventive phase can have some minimal form of consciousness. Advanced consciousness requires a cortex or equivalent, at least in Earth-evolved species.
Roman: "phenomenal consciousness, access consciousness, aware consciousness, advanced consciousness etc. What has happened to the notion of a unitary consciousness?"
Actually Roman, in my integrated consciousness the levels of sophistication are integrated by bypassing previous levels when a more sophisticated level is active.
Self-Consciousness is even more sophisticated than advanced consciousness so the dog can have advanced consciousness because it has a dual cortex system without being self-conscious enough to recognize itself in the mirror.
Dear Mario et al.,
You are right that "it takes us again to square one", and IMO here is the fundamental problem.
We may have two distinct categories:
a. Scientists who build models, AI theories or simulations of "the brain" without having direct knowledge of the real brain; they simply imagine how the brain works. In this case the prospect of fully understanding the brain's language and finding effective therapies for brain diseases remains uncertain.
b. Scientists who record and analyze data, build brain-computer interfaces and try to provide therapy. Even their "access" to real brain data is highly limited, and without reliable theoretical support, understanding the brain's language or finding effective therapies for brain diseases is painfully slow.
Between (a) and (b) there is a huge gap: little collaboration, miscommunication and deep controversies. What would be a practical solution to the entire problem?
Dear Dorian:
With all due humility I ask you to consult
On Cognitive Dynamic Systems: Cognitive Neuroscience and Engineering Learning From Each Other. Proc. IEEE, 102, 608-628, 2014
Simon Haykin, J.M. Fuster
Joaquin
Dear Joaquin et al.,
Thank you. I would like your honest opinion: is your paper in category (a) or in category (b)?
Note: My journey in electrical engineering/automatic systems started with Simon Haykin's Adaptive Filter Theory, an excellent book.
Dear Colin:
No, I cannot. I don't have the skill to do it. The only thing I know is that it should work in a highly non-linear fashion, at least with regard to some variables. If it deals with anything close to psychophysics, it should handle power functions and the Weber-Fechner law. Cheers, Joaquín
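For reference, the two psychophysical relations Joaquin names have simple closed forms: the Weber-Fechner law, S = k ln(I/I0), and Stevens' power law, S = k I^n. A sketch with illustrative constants (k, I0 and the exponent are arbitrary textbook-style values):

```python
import math

# Weber-Fechner: perceived intensity grows with the log of the stimulus.
def weber_fechner(intensity, i0=1.0, k=1.0):
    return k * math.log(intensity / i0)

# Stevens' power law: perceived intensity grows as a power of the stimulus
# (exponent ~0.33 is often quoted for brightness; illustrative here).
def stevens(intensity, k=1.0, exponent=0.33):
    return k * intensity ** exponent

# Doubling the stimulus adds a constant increment under Weber-Fechner...
print(weber_fechner(2) - weber_fechner(1))   # ln 2 ~ 0.693
print(weber_fechner(4) - weber_fechner(2))   # ln 2 again
# ...but multiplies the response by a constant factor under Stevens.
print(stevens(4) / stevens(2))               # 2**0.33 ~ 1.257
```

Both are smooth, non-linear, analog relations, which is exactly why a purely discrete on/off architecture sits awkwardly with psychophysics.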
Dear Joaquin et al.
It may depend on the frame of reference. Here is an example of very different work in this field:
http://www.thedoctorschannel.com/view/cerebral-organoids-grown-from-stem-cells-in-vienna-laboratory_votw/
1. Do you see their work in category (a) or (b)?
2. How would they perceive your paper: in (a) or in (b)?
3. Is Madeline Lancaster right that "it will take decades if not centuries to make such a brain think"?
Note: a. Scientists who build models, AI theories or simulate "the brain"... b. Scientists who record, analyze data, build brain-computer interfaces and try to provide therapy...
Colin: "I am interested in mapping an analog neuron network into a digital model "
It seems that you already have a digital model, since 01010101 and AND/OR can be implemented directly using algorithms.
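The point can also be run in the other direction: a single threshold unit in the McCulloch-Pitts style, an idealization of an analog neuron, already implements AND and OR, which is one simple way an analog network maps onto a digital model. A minimal sketch (weights and thresholds are the standard textbook choices):

```python
# McCulloch-Pitts style threshold unit: fires iff the weighted input sum
# reaches the threshold. An idealized bridge from analog sums to digital logic.
def threshold_unit(inputs, weights, theta):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def AND(a, b):
    return threshold_unit([a, b], weights=[1, 1], theta=2)  # both must fire

def OR(a, b):
    return threshold_unit([a, b], weights=[1, 1], theta=1)  # either suffices

# Full truth table for both gates:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
```

Of course, what this captures is only the thresholded endpoint; the analog transactions upstream of the threshold (graded potentials, concentrations) are exactly what the digital abstraction throws away.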
A short summary
1. It seems that we do not have a reliable model even for a single cell, since one needs an "intact", functional brain to understand the role of single neurons (the chicken-and-egg dilemma). Models for parts of the brain (e.g. cortex, hippocampus) without a whole-brain model would be far more challenging (see Ravinder).
2. Given current technology and the brain's efficiency, mapping, simulating or emulating the whole human brain from the molecular level on digital/analog computers is not a realistic project.
3. The dismissal of the HBP is due to managerial and political reasons (see Roland) and the lack of funding for EU cognitive projects.
4. Downgrading the HBP to an IT project does not solve the fundamental issue; no serious scientific alternative has been proposed to achieve its previous goals, and honestly speaking, at this point in time the goals of the HBP and other brain initiatives are unattainable.
I think there is a practical alternative. With current advanced computers and 3D animation software, I believe an interactive model of how the brain processes sensory information within the body can be built. The oscillations from heart, lungs and brain need to be created so that they change as we breathe. As shown in my studies on "pranayama", a deep-breathing technique that changes our anxious mind to a calm mind, we can have a preliminary model.
Such a model would also incorporate the memory and executive areas of the cortex. The fast brain oscillations at rest and during emotional and thoughtful responses can be duplicated at a basic level. This setup can give us a model that incorporates the mechanism of how our mind works. The phase-one model can gradually be upgraded to incorporate more detailed functions of mind.
Another important observation.
In cognitive science each theoretical model touches different parts of the elephant. The Weber-Fechner law, the Yerkes-Dodson law and many similar laws describe some brain characteristics; however, they touch just small parts of the elephant.
Neither approach (a) nor approach (b) alone can fully solve the fundamental problem, and the step (c) proposed by Joaquin would have to be very different in order to be the future. The (a) part is over-represented in this discussion; Joaquin's paper is still in (a), and an unbiased view is very difficult at this point in time. That is the main reason http://www.thedoctorschannel.com/view/cerebral-organoids-grown-from-stem-cells-in-vienna-laboratory_votw/ was presented: to balance the entire perspective.
Note: a. Scientists who build models, AI theories or simulate "the brain"... b. Scientists who record, analyze data, build brain-computer interfaces and try to provide therapy...
Whatever model is adopted in future simulations of mind and brain, cellular membrane potential in neurons and non-neurons will need to be the basis of all the biological oscillations that generate consciousness. Without stored energy in cells and brain, no sleep-wake cycle can be conceived. Brain oscillations are also fundamental to the functional role of the brain, and first these need to be well understood. Color-coded animations within the mind-body model are a prerequisite to such knowledge.
How does the weatherman tell us about the weather over any region, regarding temperature, rain, snow or wind? Through an animated model of the science, which allows us to understand what is in store for us for the day or the week.
Roman, this should give you an idea about the phase one I mentioned. Please read the articles I have written with illustrations over the last several years. You, I suppose, would know that a picture is worth a thousand words; color animation should be worth even more!
Roman, your comment: "Regarding your comment on brain oscillations related to consciousness: the answer is a stern no, since consciousness does not oscillate."
When we are awake, can you tell us what the brain's fast oscillations such as alpha, beta and gamma are processing? And why, when they quit oscillating and the brain exhibits the 1 to 3 Hz "delta" rhythm, do we lose consciousness and slide into slow-wave sleep every day?
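For readers following the band names in this exchange, the conventional EEG frequency bands can be tabulated in a few lines. Band edges vary slightly across the literature; these cut-offs are one common convention, not a definitive standard:

```python
# Conventional EEG bands (boundaries vary slightly across the literature;
# these cut-offs are one common convention).
EEG_BANDS = [
    ("delta", 0.5, 4.0),    # dominant in slow-wave sleep
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 13.0),   # relaxed wakefulness
    ("beta",  13.0, 30.0),
    ("gamma", 30.0, 100.0),
]

def band_of(freq_hz):
    for name, lo, hi in EEG_BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"

print(band_of(2.0))    # delta (the 1-3 Hz slow-wave rhythm in the text)
print(band_of(10.0))   # alpha
print(band_of(40.0))   # gamma
```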
Dear Ravinder,
"the 1 to 3 Hz rhythm that we lose consciousness and slide into slow-wave sleep every day"
I agree the rhythm is highly important; however, simulated on a digital computer it would be just a small part of the elephant, and we would still keep the chicken-and-egg dilemma: the whole brain is needed to fully understand the activity of a single neuron.
a. Why use a digital computer to simulate, map or emulate the whole brain?
• It cannot express all the forms of computation that are built into biological structure (see neuroelectrodynamics);
• It needs many megawatts to power the system (a huge issue);
• It requires billions of dollars;
• It cannot replicate sleep phases, emotions, consciousness...;
• It provides no reliable model for brain diseases.
Mapping or simulating the whole brain is not a realistic solution with our technology.
b. Why not shape a biological structure, connect it to a digital computer, use machine learning and perform all kinds of computations? See http://dx.doi.org/10.13140/2.1.2286.5608
• Emotion, consciousness... are naturally expressed;
• It can be used as a model for therapy for about 600 brain diseases;
• It can be connected to a laptop or iPhone; using digital and biological computation together can make any digital computer highly interactive, and sensors and "body parts" can be easily attached;
• Far less funding is required: only a small fraction of the funding currently allocated to the BRAIN Initiative or the EU Human Brain Project would be enough to build the first hybrid system.
"What I cannot create, I do not understand." Such a project could be the bucket-list item for an entire generation of computer scientists, neuroscientists, cognitive scientists, physicists, mathematicians, medical doctors and stem cell researchers, who should collaborate.
Through neural networks maybe we can teach a computer to have empathy, sympathy, feelings and moods. But we would have to attach to the computer a machine for crying, a machine for secreting tears, etc.
Theophanes: Could some brain simulate another?
Very nice question, with huge implications for any hybrid approach. Our brain can "simulate" another brain, though it is a different process: the behavior of the other is encoded in the mirror-neuron system, so certain actions, joy or pain can be "simulated" within the observer's brain. In this case the hybrid system can be the observer.
Colin: How many states of neuroelectrodynamics are identifiable?
Any dynamic system can have many observable and hidden states. Importantly, through the changes that occur in electrical states, one can understand the brain and neurological diseases in computational terms. However, following the current approach (animal models, the diseased brain in epilepsy), full access to real brain data and its various dysfunctions is highly limited.
Colin: Please explain how the brain can function this way as a 35 W light bulb?
Excellent question. It is a different, more powerful model of computation, far more efficient than anything we know, since everything is built to "compute" from the molecular level up. That is why we are in trouble when we try to replicate the whole brain on digital computers.
Sutaramo: Through neural networks maybe we can teach computer to have empathy, sympathy, feelings, moods
This was already attempted in the late 1990s (see https://en.wikipedia.org/wiki/Kismet_(robot)); however, that is not a reliable replication, since digital computers cannot "feel". Where can we go further with artificial neural networks?
Dear all,
Though I've got a bit lost in some of the technical discussions, I would like to attempt some abstraction to try to find new pathways which can help us.
From a functional standpoint we should take into account that, at every level of analysis, we are going to find the same functional elements implemented in very different ways but with the same function (systems biology). For example, memory is a primitive functional element we find in very different systems such as genetics, the immune system, the nervous system, etc. None of them is based on the same physical elements, but all of them offer the same functionality, namely, to store information for processing again, quickly and in a more precise way.
In that sense we should start looking for those functional primitives (memory, pattern recognition, feedback, feedforward, etc.), whether digital or analog, and the most effective and efficient architectures they can be implemented in. By doing so, we will surely find repetitive functional architectures (a fractal structure?) that resolve different applied problems, at different levels, using different physical elements. In my modest opinion, nature does not change the way of doing things if it works well enough, and I am sure that there exists a limited set of effective and efficient architectures available.
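This "same primitive, different substrate" idea maps naturally onto an interface/implementation split in software. A toy sketch (the class names are illustrative, not a proposed model): one Memory interface, two unrelated implementations, identical function:

```python
# Toy sketch of "one functional primitive, many substrates": a single
# Memory interface with two unrelated implementations. Names are illustrative.
from abc import ABC, abstractmethod

class Memory(ABC):
    @abstractmethod
    def store(self, key, value): ...
    @abstractmethod
    def recall(self, key): ...

class DictMemory(Memory):          # e.g. a lookup-table substrate
    def __init__(self):
        self._data = {}
    def store(self, key, value):
        self._data[key] = value
    def recall(self, key):
        return self._data.get(key)

class ListMemory(Memory):          # e.g. a sequential-trace substrate
    def __init__(self):
        self._traces = []
    def store(self, key, value):
        self._traces.append((key, value))
    def recall(self, key):
        for k, v in reversed(self._traces):  # most recent trace wins
            if k == key:
                return v
        return None

# Same function, completely different physical organization:
for m in (DictMemory(), ListMemory()):
    m.store("stimulus", "response")
    print(m.recall("stimulus"))   # response, from either substrate
```

Characterizing the primitives at the interface level is what would let us study one level without simultaneously modeling everything from the quantum level to the social one.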
I think this approach will allow us to isolate the understanding of different levels without the need to integrate, at the same time, all the knowledge we have, from the quantum level to the social one.
One of the critical points in this approach is jumping the "gap" between levels, where I think we should pay special attention.
Though my opinion could seem somewhat naive, I have already applied this approach to my own research in affective neuroscience; it simplifies the work a lot and offers very powerful and innovative solutions to the research problems.
Thanks,
Both "yes" and "no". The answer depends on the degree of detail in modeling...
However, the idea is not great at all. What we need instead is a thorough collection of the principles of brain functioning and then an assemblage of those principles into a working framework. We have been pursuing this line of work since 2011 (see my talk at BICA 2011 in Arlington, here on ResearchGate: "DATA FORMATS IN MULTI-NEURONAL SYSTEMS AND BRAIN REVERSE ENGINEERING"). We do not publish much, but I hope we will have really tangible progress within a year. If you are interested, come and see us in Moscow any time!
Dorian, I read your article "Can we build a conscious machine?" This brain-and-computer model has basic flaws:
1) The brain cannot develop its connection to the limbic system without visceral input from the body.
2) Our brain needs input from the body to "know" the self and develop a normal thought process.
3) The sensory organs such as eyes, ears, vocal cords, muscles and skin need to be functioning for consciousness to have both efferent and afferent inputs and outputs.
4) Blood supply, nutrients and homeostasis are important, but they are all subject to change according to how a person thinks or behaves. It would be impossible to create a mind-body response without a body, or to have slow-wave and REM sleep in the brain without autonomic changes and respiration-rhythm changes.
The best bet for a basic model is a computer simulation based on the principles of consciousness. Phase one should include visual consciousness and the mind-body response. Later, emotional links can be added as the model is upgraded to more advanced versions.
A few years ago a urinary bladder was developed in a lab from stem cells. It could not be transplanted and made functional, as it did not have the nerve cells or connections to the spinal cord.
Consciousness is a global phenomenon in the entire body at a macro and micro level. No words can capture the length and breadth of it. They say a picture is worth a thousand words; you can imagine how much information a color animation of a transparent human body can convey. If the central and peripheral oscillations, coded in color, can be animated in the body, where the key rhythms of heart, lung and brain are modulated by changing the breathing pattern, we can not only understand the process but also "feel" that it depicts the process as seen from within our minds and from outside. I have a vision and at this point can only make color illustrations; I have about 25 illustrations in my last 9 articles on this subject.
If there is a way I can get funding, a 3D "live" testable model can be constructed within a few months.
Ravinder: "Our brain, the limbic system needs input from body to "know " self and develop a normal thought process.....
1. "Body parts" can easily be attached, and their signals can be replicated more accurately by digital/analog computers than the whole brain can be.
2. Feelings, consciousness, emotions... cannot be generated in silicon; a biological structure has to be included as part of the system.
Theophanes: "What common "deep" structure resurfaces when an elephant understands rhythm and coherency?"
Very nice experiment. Would these elephants dance if the music were played differently, e.g. from a radio station?
BRAIN REVERSE ENGINEERING may have practical goals other than providing brain therapy, and a full brain replication is not required. In the case of building hybrid systems, this technology can help to improve the design of the interface, but does less to solve the fundamental problem.
Dorian, I believe we need a pilot project that demonstrates the dynamic link between mind and body in a model with real-life oscillations at well-documented and accepted frequencies, such as 1 Hz for the heart, 0.15 Hz for respiration, and brain oscillations from the thalamus and brainstem in the range from delta to gamma (0.3 Hz to 200 Hz). Variation in the frequency and depth of breathing would change brain oscillations to shift from anxious to calm states. This model can be digitized after testing and upgraded to match any individual's emotional IQ.
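The frequencies listed here can be sketched as superposed sinusoids. A minimal signal generator for such a pilot model (purely illustrative: the amplitudes and the choice of a 10 Hz alpha component are arbitrary, and no physiological coupling between the rhythms is modeled):

```python
import math

# Purely illustrative signal generator for the frequencies named in the text:
# heart ~1 Hz, respiration ~0.15 Hz, plus one cortical rhythm (10 Hz alpha).
def mind_body_signal(t, f_heart=1.0, f_resp=0.15, f_brain=10.0):
    heart = math.sin(2 * math.pi * f_heart * t)
    resp = math.sin(2 * math.pi * f_resp * t)
    brain = 0.3 * math.sin(2 * math.pi * f_brain * t)  # arbitrary amplitude
    return heart + resp + brain

# Sample 10 s at 100 Hz, e.g. as input to a 3D animation front end.
fs = 100
samples = [mind_body_signal(n / fs) for n in range(10 * fs)]
print(max(samples) <= 2.3)  # True: the amplitudes bound the sum at 1 + 1 + 0.3
```

A real pilot model would of course need the breathing pattern to modulate the other rhythms rather than merely coexist with them; this sketch only shows how cheaply the raw signals can be generated for animation.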
Dear Ravinder,
"The variation in frequency and depth of breathing would change brain oscillations to shift from anxiety to calm states ."
The experiment can be performed and this part can be easily tested
"This model can be upgraded to match any individual emotional IQ."
Characterizing "emotional IQ" based on recorded rhythms may be a more difficult task. We may find a sponsor for the pilot project once we identify an entity that can benefit from this study.
Dorian, at this point we have knowledge of various oscillations from external monitoring. It is well known that deep breathing leads to increased heart rate variability, which signifies a parasympathetic response. It is not known how this happens. The autonomic shift is also seen during sleep. I have proposed a theory based on a homeodynamic increase or decrease of the resting membrane potential by breathing leading to this response. This response is also responsible for changing the mind from an anxious state to a calm state. There are mind-body and alternative medicine departments in every major medical school. Such a visual tool could be very useful for them to teach students, residents and patients how meditation and breathing harness unconscious forces to bring relaxation. Almost 60% of office visits are for stress-related disorders. How meditation works is also not known. I have published several articles based on the membrane potential voltage theory.
Ravinder: "Almost 60% of office visits are for stress-related disorders. How meditation works is also not known."
If 60% of office visits are for stress-related disorders and meditation works, why is the entire phenomenon not fully studied? What keeps us from moving forward?
Dorian, your response:
"If 60% of office visits are for stress-related disorders and meditation works, why is the entire phenomenon not fully studied? What keeps us from moving forward?"
It has been inherently difficult to study a live nerve or vascular cell at the same time one is meditating or under stress. There have been numerous studies that show a global response when measured by the usual monitoring methods, such as blood pressure, heart rate, respiratory rate, EEG, skin resistance and cognitive performance analysis.
The theory of modulation of cellular voltage-gated channels underlying slow-wave sleep, REM sleep, and the stress and meditation responses, based on the resting membrane potential, was proposed in 2006, with details published in the last 12 months by our group. The 2006 article is listed at the bottom of the list below and has been cited in 125 subsequent articles. I have also listed several applications of the primary theory to diagnosing neurological disorders. The theory also applies to questions about the underlying mechanisms of unexplained non-neurological conditions such as pre-eclampsia.
I believe that if an institution involved me as a subject, measuring the membrane potential changes during my "pranayama" exercises, it would find laboratory evidence of this underlying cellular and molecular mechanism.
Article: Self-Regulation of Breathing as a Primary Treatment for Anxiety. Ravinder Jerath, Molly W. Crawford, Vernon A. Barnes, Kyler Harden. Applied Psychophysiology and Biofeedback 40(2), 06/2015. DOI: 10.1007/s10484-015-9279-8
Article: Etiology of phantom limb syndrome: Insights from a 3D default space consciousness model. Ravinder Jerath, Molly W. Crawford, Mike Jensen. Medical Hypotheses, 05/2015. DOI: 10.1016/j.mehy.2015.04.025
Article: Layers of human brain activity: a functional model based on the default mode network and slow oscillations. Ravinder Jerath, Molly W. Crawford. Frontiers in Human Neuroscience 9, 04/2015. DOI: 10.3389/fnhum.2015.00248
Article: Functional representation of vision within the mind: A visual consciousness model based in 3D default space. Ravinder Jerath, Molly W. Crawford, Vernon A. Barnes. Iranian Journal of Medical Hypotheses and Ideas 13, 03/2015. DOI: 10.1016/j.jmhi.2015.02.001
Article: How does the body affect the mind? Role of cardiorespiratory coherence in spectrum of emotions. Ravinder Jerath, Molly W. Crawford. Advances in Mind-Body Medicine, 01/2015.
Article: Widespread depolarization during expiration: A source of respiratory drive? Ravinder Jerath, Molly W. Crawford, Vernon A. Barnes, Kyler Harden. Medical Hypotheses, 11/2014. DOI: 10.1016/j.mehy.2014.11.010
Article: Mind-body response and neurophysiological changes during stress and meditation: central role of homeostasis. R. Jerath, V. A. Barnes, M. W. Crawford. Journal of Biological Regulators and Homeostatic Agents 28(4):545-54, 10/2014.
Article: Neural correlates of visuospatial consciousness in 3D default space: Insights from contralateral neglect syndrome. Ravinder Jerath, Molly W. Crawford. Consciousness and Cognition 28C:81-93, 07/2014. DOI: 10.1016/j.concog.2014.06.008
Article: Role of cardiorespiratory synchronization and sleep physiology: effects on membrane potential in the restorative functions of sleep. Ravinder Jerath, Kyler Harden, Molly Crawford, Vernon A. Barnes, Mike Jensen.
[Show abstract]
Sleep Medicine 03/2014; 15(3). DOI:10.1016/j.sleep.2013.10.017 · 3.10 Impact Factor
Add resources Remove
Article: Dynamic Change of Awareness during Meditation Techniques: Neural and Physiological Correlates
Ravinder Jerath, Vernon A Barnes, David Dillard-Wright, Shivani Jerath, Brittany Hamilton
[Show abstract]
Frontiers in Human Neuroscience 01/2012; 6:131. DOI:10.3389/fnhum.2012.00131 ·
Book: Pranayama: Converting Stress & Anxiety Into Inner Joy (The Illustrated Guide To Mind-Body Response)
Ravinder Jerath
05/2010; AuthorHouse., ISBN: 978-1-4490-0523-8
Article: Mechanism of development of pre-eclampsia linking breathing disorders to endothelial dysfunction
Ravinder Jerath, Vernon A Barnes, Hossam E Fadel
[Show abstract]
Medical Hypotheses 05/2009; 73(2):163-6. DOI:10.1016/j.mehy.2009.03.007 · 1.15
Article: Augmentation of Mind-body Therapy and Role of Deep Slow Breathing
Ravinder Jerath, Vernon A Barnes
Journal of Complementary and Integrative Medicine 01/2009; 6(1). DOI:10.2202/1553-3840.1299
Article: Physiology of long pranayamic breathing: Neural respiratory elements may provide a mechanism that explains how slow deep breathing shifts the autonomic nervous system
Ravinder Jerath, John W Edry, Vernon A Barnes, Vandna Jerath
[Show abstract]
Medical Hypotheses 02/2006; 67(3):566-71. DOI:10.1016/j.mehy.2006.02.042 · 1.15