If I wanted to be controversial, I would say that Artificial Intelligence is neither artificial nor intelligent.
I will just mention a few points (though the subject deserves a more thorough discussion):
(1) For instance, Geoffrey Hinton, a renowned AI researcher, said in an interview that he had spent decades desperately looking for back propagation (the mechanism by which weights in a deep neural net are updated, based on partial derivatives of the loss function) in the brain, but did not find any neural circuits doing the same thing ... yet. We can be almost sure that our brain does not work that way. Given the complexity of a single neuron and the diversity of neurotransmitters triggering different effects that spread through the axon connections, we are far, far away from the artificial neural net model used nowadays.
(2) Now that we have the algorithms to create and train deep neural nets, i.e., networks with hundreds of layers, a recent MIT article ("The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" by Jonathan Frankle and Michael Carbin) showed that you can prune up to 90% of the weights and still have your neural net perform very well! This is a very important result, because it suggests that it may be possible to find new algorithms to design small networks that are more efficient than huge nets, which need vast amounts of GPU time to be trained for days, weeks, even months!
(3) A CNN trained to classify mammals in a forest, for instance, is not intelligent at all! It is just classifying based on your training data set (assuming you are using supervised learning). Even though we talk about generalization, that is just a way of saying that if you test your CNN on images slightly different from the training data set, it still performs well. Neural nets are very good approximators of nonlinear functions.
(4) Not to mention the amount of energy necessary to train neural nets. In a recent study ("Energy and Policy Considerations for Deep Learning in NLP"), researchers at the University of Massachusetts, Amherst demonstrated that deep learning has a terrible carbon footprint.
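To make point (1) concrete: back propagation is nothing more than nudging weights along the negative partial derivatives of a loss function, propagated layer by layer via the chain rule. Here is a minimal illustrative sketch, a one-hidden-layer sigmoid net on the XOR problem with hand-derived gradients (the architecture, learning rate, and iteration count are arbitrary choices for illustration; as argued above, this says nothing about how brains actually learn):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, the classic problem a single linear unit cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 sigmoid units, random initial weights.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
loss_before = float(((out - y) ** 2).mean())

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    # Backward pass: partial derivatives of the squared-error loss
    # with respect to each layer, obtained via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
loss_after = float(((out - y) ** 2).mean())  # loss should have decreased
```

The entire "learning" here is repeated application of calculus to a fixed loss; there is no circuit in the brain known to compute and ship these exact derivatives around.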
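The pruning result in point (2) can also be illustrated mechanically: rank the weights by magnitude and zero out the smallest 90%. Note this sketch only shows the masking step; the actual lottery-ticket procedure in the paper is iterative and involves rewinding the surviving weights to their initial values before retraining.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(256, 256))  # a stand-in weight matrix

def magnitude_prune(W, fraction=0.9):
    """Zero out the `fraction` of entries with the smallest magnitude."""
    k = int(W.size * fraction)
    # k-th smallest magnitude serves as the cut-off threshold.
    threshold = np.sort(np.abs(W).ravel())[k]
    mask = np.abs(W) >= threshold
    return W * mask, mask

W_pruned, mask = magnitude_prune(W, 0.9)
sparsity = 1.0 - float(mask.mean())  # fraction of weights removed, ~0.9
```

In the paper's experiments, subnetworks found this way (and rewound to their original initialization) train to accuracy comparable to the full network, which is what makes the result striking.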
Even though there is no intelligence in deep learning, it is a fantastic tool, as I mentioned earlier, as an approximator of nonlinear functions. For instance, what you can do with a CNN today was not achievable after decades of research by the image processing community.
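The "approximator of nonlinear functions" view is easy to demonstrate: even with a random, untrained hidden layer and a single least-squares fit of the output weights, a few dozen sigmoid units track a nonlinear target closely. A toy sketch (the target function, unit count, and weight scale are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: a smooth nonlinear function on [-3, 3].
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(2 * x) + 0.5 * x**2

# Hidden layer with random, never-trained weights...
W = rng.normal(size=(1, 50)) * 2.0
b = rng.normal(size=50)
H = 1.0 / (1.0 + np.exp(-(x @ W + b)))        # sigmoid features
H = np.hstack([H, np.ones((x.shape[0], 1))])  # plus a bias column

# ...and a linear output layer fit in one shot by least squares.
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ coef

rmse = float(np.sqrt(((y_hat - y) ** 2).mean()))  # small residual error
```

No notion of "understanding" the target is involved anywhere; the network is simply a flexible basis for curve fitting, which is the point being made above.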
My thinking is that we will quite soon find the limits of what deep learning can do, and that will set the stage for a new wave of research that will probably bring "more intelligence"... this is how research progresses.
Basically, some notions in such models are inspired by natural intelligence. The artificial neural network, for example, amazingly mimics the mechanism of the biological neural network, being based on the interaction between computational units called neurons.
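For concreteness, the computational unit in question is very simple: a weighted sum of inputs passed through a nonlinear activation, a loose echo of dendritic integration followed by firing. A minimal sketch (the weights and inputs below are made-up values for illustration):

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a
    sigmoid activation playing the role of a 'firing rate'."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Strongly excitatory vs. strongly inhibitory connections
# drive the output toward 1 or toward 0 respectively.
excited = neuron(np.array([1.0, 1.0]), np.array([3.0, 3.0]), 0.0)
inhibited = neuron(np.array([1.0, 1.0]), np.array([-3.0, -3.0]), 0.0)
```

How loose this echo really is, compared with an actual neuron, is exactly what the discussion below takes up.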
I would compare computer intelligence versus neuronal intelligence to predicting a toss of the dice versus the relationship between what goes into a black hole and what comes out. Unfortunately, we have lost track of how our computers learn and how they use acquired knowledge to learn even further... yet we designed them, made them, and programmed them using rather basic elements. Fundamentally, it seems to me, the complexity of synapses is in no way comparable to the "0" vs "1" states basic to a computer's microprocessors. I still marvel at how a memory is called up in the brain, as no doubt there must be more to it than in my cell phone.

Looking at the evolution of brains with ever more computing capacity than their ancestors, we forget that each generation adds much more than more neurons. It would seem that tremendous variability in neuronal processing evolves, so much so that current computers just can't match up, and attempting to model the brain using AI certainly seems futile. Of course, cell assemblies don't just compute an answer; they somehow put it forward as motor action, as a "feeling" in terms of emotive circuits, and as complex interactions of the two. Despite all our science, we still do not understand which rules apply to all neurons, and how much peculiarity, violating all the seeming rules, characterizes the nature of NON-ARTIFICIAL intelligence. Perhaps the ionic fluxes caused by a local patch of membrane destabilization at a synapse are the whole "transmission" story, but I doubt it. We just don't know... even more so than in the case of computer learning. So many complex and varied signals come through a synaptic cleft, and it looks to be so stochastic, that I tend (possibly to my detriment) to avoid data-free AI modeling papers, while I never avoid papers suggesting possible neuronal transfer of information BASED ON DATA, even when they are admittedly declared to be wild guesses.
My reasoning is that I am attempting to understand gross neurological effects suggesting a brain region's function from the pathology in a neurally generated process (i.e., a motor potential --> tension change in a muscle). But I must confess that I am not comfortable translating the biophysics of neuronal depolarization wave changes into changes in EMG. I can't help feeling that I am missing a great deal of richness in the neuron-to-muscle current transaction, so it is hard for me to return to the old biophysical constructs of change in Hertz, for example, or of discrete shape change in pre- and post-synaptic potentials from before to after activation via some "controlled" experimental perturbation, as opposed to assumed "rest" activity, since I really don't know what that means.

I don't think we are in a position to guess what neurons tell each other, despite a century of intense research, because we have repeatedly been faking their language while pretending that we understand it. Do computers really help us figure out the language of interneuronal communication? Do they really tell us whether a neuron said "yes" or "no" at some micro or macro level across some synaptic membrane, as if it were a matter of summation and time decay? And then, how much is information and how much is noise? Can we resolve the difference with transmission thresholds? Can we really say that we have gone far beyond the heady days of 1950s psychophysics, recording spike rate and duration as the essence of the message code? I think not, for every impulse that crosses a synapse changes that synapse for a relatively long time. I think it a cop-out to smooth things over statistically or with computer modeling. All this and more, much more, makes me hesitant to focus what's left of my attention on some computer-model article or lecture that explains "neuronal processes."
Much happens and much changes in the immediate environs of each synaptic cleft, where a current passes, or where resistance and voltage are chemically changed by chemical contents spilling out of boutons, somehow provoked by calcium ions. In neuroendocrinology it seems much easier to understand, as peptide or protein molecules flow out into the blood to snag matching receptors on distant tissues. But in neurobiology, we are really still asking "WHERE'S THE CODE?" N'est-ce pas?
By contrast, in computer science, every 12-year-old on up is seeking a magical code to make a program that will make him or her rich enough to support mom and dad, the dog, and a couple of girlfriends who might get knocked up along the way. I don't see any young "entrepreneurs" making it big altering "neural codes." Our big success with external stimulation programs via chronically implanted electrodes is totally trial and error. And yet, recently, our god of basal ganglia neurosurgery seems to be thinking that maybe we ought to go back to blasting the VL thalamus on the good side in order to balance off aberrance in the VL on the bad side, hoping to abate tremor and rigidity. What does it mean when prize-winning neuroscience advances are so very often reversed? Perhaps our understanding of neuronal ionic membrane transmission, translated into chemical transmission from one neuron to another, concerns information transfer that we will never figure out at the micro level. It may be that, deep down inside, we all know that, to this day, none of us can speak "neuronese." So we are still at the "now they're talking... now they're not talking" stage, and we are not always sure even that is true. Can we definitively say when a neuron has stopped talking, when it is talking... or when it is "thinking"?
We are doing a lot better at modelling computer information transfer, because the distinction between 0 and 1 is the basic message unit and there is no ambiguity there. But in neuronal traffic, especially now when considering neuronal plasticity, that wonderful genius I had the joy to casually sit and talk with, Donald Hebb, has left us with a few principles of plasticity that muddle our notions of information transfer far more than they assist our understanding of "neuronese," because electrical differences in pre- to post-synaptic membrane current changes are, in our "publish or perish" circumstances, more often delusions and illusions for us than sources of better understanding of information processing in the neuronal bioprocessors of the CNS. I avoid the modeling papers because they could be right... but then they could be wrong, and I have no way of knowing which. One might argue that, thanks to PETA's anti-vivisection campaign, the AI folks have been able to pass off their neuromites as neurons in the neurobiology literature. But this is most tragic, as LI (living intelligence) is a lot more than AI; or at the very least, we have no way of testing the correctness of AI models of LI!
Maybe if the AI folks would get a better grip on how their level-6 computers, and even their "atomic" computers, learn, and only then compare that with real neuronal assemblies, then we might look to analogize. Until then, we are tethered to the ground by all our illusions and delusions, and above all by how we punish young scientists for down days and dead-end experiments.
OF ONE THING I AM SURE: SCIENCE IN THE HANDS OF ENTREPRENEURS LEADS TO ENTREPRENEURIAL FRAUD, AND SCIENTISTS FEELING INSECURE LEADS TO SCIENTIFIC FRAUD IN DESPERATION. Giving life-long assured tranquility and security to young scientists who have spent a quarter century jumping through ever-higher academic hoops so as to finally get a project of their own is a must if we are to be confident about the scientific world we make and live in. The alternative, AI models funded by entrepreneurs who threaten career execution if the accelerating profit curve is not kept accelerating, is like turning nuns into prostitutes. Profiteering has so polluted science with insecurity that, according to one study, about one third of PUBLISHED research is tainted with desperate fraud. This can be avoided as it was in the past: the Federal Government, on behalf of the public that stands to benefit from science and AI, should fund the research and secure the stability of proven researchers. Accelerating profit demands put an accelerating pressure on the ARTIFICIAL and a decelerating pressure on the INTELLIGENCE. 2008, I would remind you, showed us unequivocally what happens when the profit-motive virus proliferates in our society. Yet the government bailed them all out, while it cuts off young scientists unless they publish the equivalent of a Broadway hit. Every time you take out your little piece of AI, your cell phone, you recklessly announce to both Wall Street and organized-crime entrepreneurs: "COME AND GET ME." So it is safe to conclude that AI has gotten out of control... it just ain't intelligent anymore, as the more it knows how to do, the less it knows what it should do other than monetize data. We must stop making elements critical to our survival, and that of our planet, the playthings of AI profiteers.
By publicly funding research and assuring the well-being of researchers, openly and consistently, we will get a hell of a lot bigger AI bang for our buck, while also putting AI under careful scrutiny from the Congress we elect. You can't expect AI to self-regulate either, for its "intelligence" cannot realize when it is a helpless slave in the hands of evil. HI (human intelligence), on the other hand, knows full well when it is evil and can be made to take responsibility for it. By treating AI as a tool and HI as the master of that tool, no matter how little we understand HI, at least we know that HI can be held responsible for its actions. Yet we know that greed is man's greatest weakness. Since we don't know how to prevent that, only punish it, we would do best to limit the freedom to use AI for profit, at least until we learn how to prevent the predatory use of AI that occurs now. Since profit seems to be the common goal of the worst amongst us, we can be sure that we must prevent the monetization of AI, just as we prevent the monetization of publicly funded research; AI must go back to the agencies that invented it. Right now, some "entrepreneur" can know exactly how many pubic hairs each of us has through the AI we each so avidly put in our private spaces... and it is only the beginning. Yet HI we have not even begun to understand. The imbalance is so great that, like guns, AI should be carefully regulated, so that all its operators know they will be held accountable for what they do with it. By contrast, upon providing scientists and their families a modest but secure opportunity to do their work, we will be assured of responsible behavior. Competition for everything from grants to Nobel prizes invites entrepreneurial-like evil behavior. Far better to assure them a modest but secure future so long as they behave ethically. Recognition is best appreciated from colleagues, but appreciation must come consistently from the nation as a whole, for HI as a creator using AI only as a tool.
If you allow a man to have a gun when no one else can have one, that man is bound to use it on the gunless. Now that Silicon Valley has a monopoly on AI, it uses it cannibalistically on us, obtaining secrecy for itself while taking privacy away from us. By contrast, scientific HI needs openness to function. Fewer AI cell phones and more basic scientific research would benefit us all.
The little we know about how the brain takes in input and processes it into output is never as solid as what a computer does. But then, a computer does what it does so fast and so often that it can keep testing its own notions. Looking at us humans, we take decades to do what we gradually decide to do, and we often find ourselves REGRETTING our actions. Can a computer simulate human regret?
I do not specialize in cognition. However, work I have conducted over the past thirty-five years sheds light on cognition, and much more. As a young person, at age 17, I saw a common form for the equations I examined. Then, weeks later, I envisioned a new physics that gives us the notions of gravity, EM, etc.
In this new standard model of physics, nothing truly collides; everything communicates. And it does so instantly, across any distance. I call it elastic communication and robust entanglement. Here we see four forms of energy form.
It is in this model of things that we would advance our understanding of cognition. As energy is formed freely, spheres encourage this formation of energy by sheer geometry. As symmetries form, entanglements form atop one another, stepping up the energy until, in a tertiary field, the energy shifts back and forth.
As such, the origin of thought may come from any locale. The cerebrum, a soft-tissue extension of the energy created freely in cells, is seen to "breathe" back and forth.
Yet there is more. As we come to understand the very nature of atoms as iterative, elastic dances, we question our notions of cellular death. And here, in these questions, we find a presence within living things responsible for aging and cellular death.
Now, with this new understanding of the atom and the human condition, we are prepared to reexamine cognition.