No, I don't think so, because as a Muslim I also believe that we have only limited access to our brains, so how could we create a machine that thinks better than we do? And because of our cognitive abilities, humans can increase their brain power in a way that a machine cannot; that is why an artificial creature will never surpass humans.
Artificial Intelligence (AI) has raced forward in the last few years, championed by a libertarian, tech-loving and science-driven elite. These “transhumanists” proclaim the eventual victory of the machine over nature. First we will become integrated with chips; and then, perhaps, we will be surpassed by them. This AI-inspired future, with echoes of Blade Runner and Battlestar Galactica, is profoundly depressing for many people, bringing with it a world where human creativity and uniqueness have been replaced by the standardization of robots.
The transhumanist vision is premised on the belief that brains are essentially computers. That AI fans are inspired by this idea is not surprising, given that many have made obscene amounts of money building silicon-based machines, or the algorithms that run on them. Algorithms underpin the entire business of the internet, powering the might of Google, Facebook and Netflix. They are bits of code that make computations, and they serve up adverts, content or services to us users based on the results of those computations. AI advocates think that once computers have sufficiently advanced algorithms, they will be able to enhance, and then replicate, the human mind.
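To make that concrete, here is a minimal, purely illustrative Python sketch of the kind of computation such a content-serving algorithm performs. The user profile, items and topic weights are invented for the example and do not describe any real company's system:

```python
# A toy, hypothetical content-ranking "algorithm": score each item by how well
# its topic weights overlap with a user's interest profile, then serve the
# highest-scoring items first. Real recommender systems are far more complex.

user_interests = {"sport": 0.8, "cooking": 0.1, "tech": 0.6}

items = {
    "football highlights": {"sport": 0.9, "tech": 0.1},
    "laptop review":       {"tech": 0.9},
    "pasta recipe":        {"cooking": 0.8},
}

def score(profile, topics):
    # Dot product of the user's interests with the item's topic weights.
    return sum(profile.get(topic, 0.0) * weight for topic, weight in topics.items())

ranked = sorted(items, key=lambda name: score(user_interests, items[name]), reverse=True)
print(ranked)  # items served to the user, best match first
```

The point is simply that the output is fully determined by the inputs and the rules: given the same profile and the same items, the same content comes back every time.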
However, this seductive belief is rooted more in metaphor than reality.
Humanity has always approached cognition through the ruling metaphor of the day. The ancients thought about the mind in terms of humors. Early Modern Christians, like René Descartes, saw the mind as something intangible, probably to do with God. In the Industrial Age, the brain finally became a machine: first a kind of steam engine; then a telephone exchange; and finally a computer (or a network of them).
Yet the computer metaphor ignores perhaps the most species-defining characteristic of human beings: that we can create things, and we can do so consciously. Not only can we create concepts, business models and ideas; every single human cell can create itself! Yet no machine, no matter how flashy, has ever been able to do this. No scientific theory has fully explained how life creates itself, or where this creativity comes from. Great scientists like Erwin Schrödinger have expressed profound curiosity about how life can buck the great laws of physics, notably the second law of thermodynamics, the law of entropy.
Mainstream science claims that the universe works according to fixed rules, discovered by Newton, Faraday and Maxwell. This is the universe as machine. Yet here is the doozy: whilst our most advanced machines, algorithms, make complex calculations according to a series of rules, disruptive innovators and genius creatives — the kind that birth new business models like AirBnB and new forms of art like Guernica — break the rules. And we can all enjoy these kinds of rule-defying breakthroughs every time we conquer habit and speak to our lover in a new way, or break free of the past by following a new passion.
So where do those breakthroughs — those disruptive innovations — come from? Well, if they came from an algorithmic brain, then surely we would already have been able to derive them from what we knew in the past? Breakthrough creativity would merely be a re-assembling of what we already know. Yet breakthroughs, by their nature, are unpredictable; whereas algorithms make people rich by being predictable. If breakthrough creativity cannot be fully forecast from past behaviours and beliefs (as many disrupted businesses can testify), then it must come from somewhere other than the past (and our memories of it). The complexity theorist Stuart Kauffman calls this “partial lawlessness”: a little gap that allows creativity to emerge out of regularity. This gap echoes the paradox of Kurt Gödel, who showed that no sufficiently powerful mathematical system can be both complete and consistent. There is always a chink or kink. Ironically, the father of computing, Alan Turing, seems to have come to the same conclusion.
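For readers who want the precise claim being invoked, Gödel's first incompleteness theorem can be stated informally as follows (this gloss is added for reference and is not part of the original article):

```latex
% Gödel's first incompleteness theorem, informally stated: any consistent,
% effectively axiomatized formal system F that can express elementary
% arithmetic contains a sentence G_F that F can neither prove nor refute.
\[
  F \ \text{consistent and effectively axiomatized}
  \;\Longrightarrow\;
  \exists\, G_F :\; F \nvdash G_F \ \text{ and } \ F \nvdash \lnot G_F
\]
```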
Countless ground-breaking artists — from multiple Booker Prize winner Hilary Mantel to Isabel Allende; from Ludwig van Beethoven to John Lennon — have made it adamantly clear that they have never been able to predict what creations will emerge next; nor, indeed, do they know where those creations really come from. Additionally, the act of bringing such breakthroughs into the world, usually against enormous resistance from the status quo, is itself a profoundly human talent, driven as it is by narrative, vision, empathy and influence.
For this reason, I am convinced that no computer, no matter how powerful, will ever be able to purposefully innovate an artistic breakthrough like Hip Hop, or a commercial one like Instagram. Breakthrough creativity is fundamentally organic, not algorithmic. Whilst computers, and the businesses that run on them, are breakthroughs, they themselves will never make them.
So rather than using a machine metaphor, even one as elegant as the internet, to understand the brain, I propose we use an organic metaphor. After all, our brain is an organ in a biological organism, working to help us survive and thrive in a biological ecosystem. When we see creativity as organic and not mechanical, we begin to glimpse possible ways to account for it, including revelations from quantum biology that suggest some of the functions of our brain may be quantum mechanical in nature... and so conceivably able to give us access to all the information in the universe, past or future.
I have spent 20 years investigating how breakthroughs can be created, sustained and communicated. Out of leading countless breakthrough innovation projects and personal development programs, Breakthrough Biodynamics has emerged: a comprehensive framework that aims to support us all in leading transformative change in human systems (whether within individuals, families, businesses or societies). It unites the latest science with timeless philosophy to uncover the logic of how discontinuous, non-linear breakthroughs can be created and then sustained, so that our brains or businesses do not return to their historical default settings.
At the heart of it is a “J-shaped” curve, the Breakthrough Curve, that appears wherever breakthroughs occur; from enzyme catalysis and narrative arcs to scientific revolutions and political ones. It may even trace the process of the death and birth of universes within a many-worlds interpretation of quantum field theory.
The Breakthrough Curve may be nature’s blueprint of creativity; but each breakthrough we human beings have is unique to the context it emerges in. Each involves us blending emotion and reason, rule-breaking and rule-making, as we unleash from within us whatever is seeking to emerge in that matchless moment. No machine will ever be able to mimic our peerless organic nature as inherently, inescapably, beguilingly creative.
The views will be highly subjective, for various reasons; I would say it depends on your sample group and the methodology used. I don't think AI can surpass human intelligence, even though AI can do things in more effective and efficient ways.
Artificial intelligence cannot surpass human intelligence now. There have been great advances in this area, but current knowledge and technology are not enough to surpass human intelligence.
Talking about general AI: we don't know yet; probably not with the tools we have right now, according to some of the big researchers (not talking about Musk or Hawking, who love the attention but are both not in the field; yes, Musk produces AI, but he's not a technician/mathematician, so he's just guessing).
Surpass us on a very specific task? Definitely: we can already see machines beating humans on many tasks (chess, or Go, to give two examples that were thought too complex for machines not so long ago).
Nevertheless, there are some critical weaknesses (such as adversarial examples) that still give humans some leverage over machines.
Besides that, the only way to beat humans, in my opinion, is reinforcement learning. As long as models are purely data-driven, they will not become better than the data they are trained on (at best equalling it), and that data is produced by humans. But since we are talking about expert-level data, even data-driven machines can perform better than the average human.
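To illustrate the difference, here is a minimal, self-contained Python sketch of tabular Q-learning on a toy "chain" environment (the environment, states and parameters are all invented for this example): the agent improves purely from trial and reward, with no human-labelled data to cap its performance.

```python
import random

# Toy "chain" environment: states 0..4, start at state 0, reward only at state 4.
# No human-labelled examples are involved; the agent improves from reward alone.
N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)   # actions: move left or right

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning with epsilon-greedy exploration.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(state):
    # Pick the best-valued action, breaking ties randomly.
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(200):
    state, done = 0, False
    while not done:
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy heads for the goal, discovered purely from trial and reward
# (the action listed for the terminal state 4 is arbitrary).
print([greedy(s) for s in range(N_STATES)])
```

By contrast, a purely supervised model trained to imitate recorded human behaviour can, at best, match the humans who produced that data.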
About the future:
Machines are so far just very bad at generalization, and for now the biggest nets, as far as I know, are comparable in their number of neurons to a simple insect; nobody knows what's going to happen if we build bigger ones.
I think that intelligent machines can be stronger and smarter than humans: they can overcome human weakness and poor accuracy, and they can be exposed to environments not suitable for humans, such as extremely high and low temperatures, high radiation, and so on.
But, on the other hand, the human brain is so complicated that its power cannot be limited.
Intelligence alone is not enough; creativity and imagination are needed too.
The problem lies in the question: the notion of intelligence continues to shift over the decades. A hundred years ago, a chess-playing computer that beat all humans would probably have been conceived of as more intelligent than humans. Not so any more.
Moreover, we don't know precisely what constitutes human intelligence. We are a hyper-social species, so odds are that our intelligence is `skewed' in a social way. That gives us advantages in some areas and disadvantages us in others. There has been quite a bit of research in psychology about the "sub-optimality" of human thinking in certain areas due to these social biases.
So in some respects, and for certain application areas, machine intelligence is already superior to ours, but most of us may not conceive of these tasks as requiring `true intelligence', by which they probably mean `human-style intelligence'. Apart from our being `socially skewed', in contrast to most existing artificial intelligence, there is the other issue in AI of lack of generality that Jonas Goltz has pointed out above. There is a field called General AI where researchers are trying to overcome that limitation, but it's a relatively recent addition to Artificial Intelligence. DeepMind, now part of Google, is also very much trying to tackle this problem; have a look at deep reinforcement learning if you're interested.
So, in summary, there is no satisfying answer to your question, first and foremost because there is no clear definition or idea of what precisely constitutes intelligence, let alone human intelligence. In some areas, that is to say for certain types of problems, machines have surpassed us already. But critics will immediately point to other areas where they haven't in order to defend human intelligence.
So as long as we don't fix the yardstick, anyone can claim that machines cannot surpass us, simply by moving the stick a bit or transplanting it into a different field altogether.
Furthermore, you can easily come up with notions of human and machine intelligence that render the two incomparable, such that your question stops making sense.
It would be like comparing a shovel with a screwdriver and asking which of the two is more useful. The answer clearly depends on the problem you're trying to solve, and makes no sense when asked in a general way. If you want to dig a hole, the shovel beats the screwdriver hands down. If you're trying to drive a screw, the shovel will be useless.
“The real risk with AI isn't malice but competence,” Professor Hawking said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.”
I do not believe that artificial intelligence can surpass human intelligence, but it can cause our demise or massive destruction if it falls into the wrong hands or misinterprets a command. These machines are created by humans and act based on the commands of the user/owner. Everything that civilization has to offer is the result of human intelligence.
One suggestion by Dr. Armstrong is to teach robots a moral code, though he is pessimistic this will work. He is concerned that a simple instruction to an AGI to 'prevent human suffering' could be interpreted by a supercomputer as 'kill all humans', or that 'keep humans safe' could lead to machines locking people up (www.dailymail.co.uk/science).
I think that more protective/helpful artificial intelligence should be created rather than destructive artificial intelligence, or better still, safeguards should be developed, like teaching robots a moral code, since a programmer might not know the potential of what he himself has created.
AI has ALREADY surpassed human intelligence in many things. I think you need to refine the question. I prefer: "What are the areas of human intelligence that AI is least likely to be able to surpass in the next 20 years?"
Of those, there are a number that most AI experts would agree upon, but the time scale makes a big difference in the answers you get.
Typically, AI can perform almost all tasks faster than humans, and many of them with far less error. Landing an airplane is one example; playing Go or chess is another.
The areas that now seem most difficult for AI, and have the worst prognosis, include understanding the meaning of language. A task such as reading a page and listing the concepts covered and the viewpoints taken is one where not much progress is likely.