In his December 2014 interview with the BBC, Stephen Hawking, the author of "The Universe in a Nutshell" (2001), warned that artificial intelligence (AI) could end the human race. Machines that can think pose a threat to our very existence, and "the development of full artificial intelligence could spell the end of the human race," the theoretical physicist told the BBC. Hawking's warning came in response to a question about the new typing technology in the computer that allows him to speak, which predicts the words he is going to use, the BBC reported. Is he right? My answer is "no". What is your opinion of his remarks? I am curious what your thoughts are on Hawking's prediction. Thank you all in advance.
http://www.independent.co.uk/news/science/stephen-hawking-ai-could-be-the-end-of-humanity-9898320.html
By encouraging the renewed demonization of AI, Hawking deflects attention from what is in reality the major species risk factor: the lamentable tendency of human beings toward selfish ambition, competition, violence and warfare, and hence self-destruction.
AI and 'the end of humanity' make a nice combination for Hollywood movies. Recently I read that Hawking wants to play the villain in the next '007' movie... As for the mood, no comment!
Intelligence would be attributed to a system that generates behaviour that cannot be predicted by its programmers. While such a system has access only to, say, a microphone and some databases, it will not be able to bring an end to humanity.
With internet access and the ability to keep expanding its knowledge, it could hack its way into systems that would allow it to threaten humanity. In theory. But current programs do not generate behaviour that is far from what they were programmed to do. So far they are so far from "thinking" that I do not see any danger.
I absolutely agree with James Doran: before we develop a dangerous computer system, some humans will abuse less powerful systems in ways that endanger humanity.
Man is so skilled at inventing things that are bad for himself that artificial intelligence will not be the origin of our problems. We have invented genetically modified foods, artificial sweeteners, cigarettes, illegal drugs, chemical weapons, viruses, pesticides, and the list goes on and on. Man also created the airplane and later used it to kill people. It will all depend on what use we want to make of artificial intelligence devices. We should not be afraid of the future. I am more afraid of people who do not care when others have nothing to eat and nowhere to sleep, and that is what I would call a natural lack of intelligence. That scares me to death.
Stephen Hawking is not the first to point out the dangers of artificial intelligence. I suggest you read the excellent and influential book "Superintelligence" by Nick Bostrom. The danger of Homo sapiens becoming dispensable once machines have reached the threshold above which they can improve themselves is clear and present and cannot be ignored, courtesy of exponential functions. As Bostrom argues at length, the acquisition of superintelligence after this threshold may even be a matter of hours. The development of superintelligence is unavoidable, but great care has to be taken over which values and goals are programmed into the intelligent software and which countermeasures are taken at the beginning of this development. And I totally disagree with Doran and Leupold: the danger from humans abusing typical non-intelligent technology is peanuts compared to the existential threat a new superintelligent species would pose to us.
Absolutely not. Certainly not in the short or medium term, and no way in the long run. Beyond those ideological interpretations, the reality is that the history of the intertwining of men and machines is one of cordiality. I am on the optimist side. The brain-chip interface will become a certainty; the hybridization of man and machine is already a reality.
AL (artificial life), on the one hand, and AI, on the other, are two sides of the same coin. The best technological and cultural expression of that coin is robotics: individual robotics and swarm robotics. The bet on AI and robotics is a most prosperous one. No question about that.
Hawking says artificial intelligence could mean the end of the human race. What he means is that AI will become self-aware and could spell the end of the human race. But machines don't evolve at all, do they? Humans, on the other hand, do. So his comparison and statement are absurd. He should know well that it is humans who are making better machines, not the machines themselves.
1) How can we model feelings?
2) How can a machine (AI) deduce the sense of a word in a given context, such as a joke?
These are abilities of the human brain.
Dear Mestadi, good points. Besides, AI lacks evolution, imagination, emotion and experience. Thank you.
Dear friends, as an argument toward the evolution of machines, I most cordially invite you in this post to read, for instance, R. Kurzweil's books. We have now developed programs that program other programs. This is one of the most salient threads in AI and AL.
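As an aside, the idea of "programs that program other programs" can be shown with a toy sketch. This is a hypothetical illustration only (real AI code-generation systems are far more elaborate): a function assembles new source code as a string, executes it, and hands back a function that did not exist before.

```python
# Toy sketch of "a program that programs another program":
# we build source code as a string, compile and run it, and
# retrieve a brand-new function from the resulting namespace.
# (Hypothetical illustration; not a real AI system.)

def generate_adder(n):
    """Write, compile and return a function that adds n to its argument."""
    source = f"def add_{n}(x):\n    return x + {n}\n"
    namespace = {}
    exec(source, namespace)          # execute the generated source
    return namespace[f"add_{n}"]     # fetch the newly created function

add_5 = generate_adder(5)
print(add_5(10))  # prints 15
```

Whether generating code in this mechanical sense counts as "evolution of machines" is, of course, exactly the point under debate in this thread.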
Dear All,
Many science fiction stories have humans evolving to a point where they don't need bodies. Maybe Hawking got his idea from there! He is not alone. If you want to know who else thinks that way, follow the attached link. Besides Hawking, the other four are Elon Musk, Nick Bostrom, James Barrat, and Vernor Vinge.
http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/
Dear Carlos Eduardo, I am a big fan of Ray Kurzweil, have read all his books, and I also share most of his opinions and your optimism. But if you look closely, his thoughts are mostly about the inevitability of these developments and about how they will be achieved, from the point of view of a scientist. He rarely considers the question of what will happen once we have intelligent (or superintelligent) machines. Also, if the superintelligent entity is a human-machine hybrid, the question of a possible existential threat vanishes, since we will merge with these new entities. A problem may arise, however, if the new superintelligent entities are totally separated from us and develop a set of goals and values incompatible with the existence of the human species. This is a typical example of an extremely skewed outcome distribution: the probability of this happening may be very low, but the consequences would be so devastating that, in my opinion, it would be madness not to spend at least some time thinking about possible countermeasures before it happens.
Dear Carlo, you make a good point. This refers to the digital divide, a most sensitive issue that has been a topic in one of the posts here on RG. I agree on the importance of the digital divide. No question about this.
Please allow me one more justification. Far from any ideological take, from a truly practical point of view, robotics has sensibly improved medicine and the work of doctors. Healthcare has benefited from AI, and this is not a minor issue.
We do not need fewer machines and more (human) control. We need more and better research, more and better knowledge, more and better technologies to make our world possible.
So far, only primitive forms of artificial intelligence have been developed, and they have proved very useful. But future evolution toward a complex artificial intelligence involves the risk that it could equal or exceed humans. And because we have biological limits that artificial intelligences would not, do we seriously risk being replaced?
No, dear Mahmoud. Artificial intelligence will not be the end of humanity. As we know, science keeps evolving and developing, and every day we see new discoveries.
Yesterday we had millenarian movements; today those same movements reappear as warnings and fears about technology. I do not mean anyone here in particular. This is just a general cultural statement, of course.
Stephen Hawking exaggerates the consequences. Yes, in the future we will be able to understand how the human brain works. We may even create thinking machines. However, they will never be able to be fully like a man; otherwise, ideally, we would just be replicating the person. Yet the creation of self-learning thinking machines could significantly change the role of man in the world and in human relations.
You might consider the questions: Can machines think? Can machines have feelings? Nobody has any answer to these questions.
That is not my question but the opening of Alan Turing's seminal 1950 paper, which is generally regarded as the catalyst for the modern quest to create artificial intelligence.
Another risk with Artificial Intelligence and technology, is that humanity as a species co-evolves with the technology and enters some kind of symbiosis with it. This could mean that we in the future would end up being less autonomous and independent beings, since we rely on technology as a crutch. There are already some tendencies in this direction, for example people becoming poorer at remembering facts, locations or phone numbers due to these being readily available in mobile apps or on the Internet. Another concern (as a parent) is that children are losing practical skills in favour of digital skills (often gaming skills).
What we should ask is what the effect would be of making real, for a single human being, that state of things and phenomena that Huxley called "perceptually amplified" through mescaline.
And since the use of mescaline was made illegal while, conversely, the adoption of AI agents embodied in every single human being seems to be in high demand, what will be the possible physical limit, and who would set it?
And finally, if the collective adoption of hallucinogenic substances was merely confined to self-marginalized communities, while the embodiment of AI devices and systems will be massive, what will be the effect of passing from the simple concept of "amplified perception" (or deviation from the balance of the real) toward a "real amplification" of human physical phenomena?
Will it be sustainable?
Frankly, I think it will not.
The solution would be to change the standard order of fitness necessary for a human system, starting from a single individual.
But to do this, the question is: to change it by lowering or raising, that standard?
—g
Why do we want to create AI in the first place? Is it just to prove that we can? Obviously not. The real drive is to have a machine do things for us that we find difficult, time-consuming or just unpleasant. We want a machine that can analyse data, draw conclusions and make decisions with at least the same ability a human can, or preferably faster and more effectively. If we achieve this, it is easy to picture a situation where we would just keep handing over more and more responsibility to a machine to take care of the less desirable jobs we have to do. We already tend to do this with the technology we have available today. The safety concerns of AI simply relate to what we would use it for. What jobs would we give an AI machine? AI astronauts? AI bankers, lawyers or judges? AI police officers? AI soldiers? AI in control of a country's nuclear arsenal?
Would an AI machine actually want to do the jobs we give it? There are plenty of jobs that I'd turn my nose up at. Will AI have free will? If it doesn't, then is it truly AI? Will it have a conscience? Can we program compassion into it?
Whether AI would really be safe or not, it does seem as if we often try to create things without fully considering the implications.
AI will probably be widely deployed, as Richard suggests. There are clear advantages to using AI astronauts: the Curiosity Mars rover is one step on the way toward automating such services. AI bankers have largely superseded humans in high-frequency stock trading, which is controlled by algorithms rather than explicitly by traders. There is ongoing research on autonomous war robots and drones. Microsoft uses robot guards to watch its premises and report abnormalities, etc.
The question is what becomes of us if we no longer have any challenges: can we still evolve as a species, and will there be a role for us in future technology? The average extinction rate of mammal species is estimated to be between 0.4 and 1.8 extinctions per million years, and humans have already been around for a while. In such a time perspective, the risk of technology taking over after us could be real.
Current apocalyptic views do not reflect the thinking of an AI but instead reflect our own current state of affairs. As a matter of fact, I might pose the inverse question: what happens if a more evolved intelligence embodied in an AI becomes a reality? Will we try to destroy it?
Of the two questions I think the last one is the most probable given these apocalyptic theories that are the byproduct of human minds and not of the AI.
AI is a technological attempt to model and create a thinking machine that behaves and performs like a human, but there are billions of humans, and none of us threatens humanity and its existence, nor has the capacity to do so.
What threatens humanity is natural calamity; the emergence of uncontrollable pathogens that cause incurable diseases and deaths; warfare of any kind, especially if atomic, biological and chemical weapons are developed for its sheer sake; or a collision of galactic matter with planet Earth that causes environmental changes, similar to what drove the dinosaurs extinct.
Indeed, technology replaces human labour, but not to the extent of threatening our existence. Hawking's fear emanates from the very computer he uses for communication, citing that his machine predicts the words he is going to use. But that is not surprising at all: he uses a limited, habitual vocabulary from his profession and his limited social life, from which the computer can learn and handle that selection, computation and prompting of suggestions.
In fact, AI is the future of humans: if successful, machines can be sent deep into space, with the possibility of coming back, as they have no life to be threatened by a lack of suitable living conditions; they need only energy and the proper functioning of their hardware, equipped with reliable programs.
The mission to Mars, for instance, could easily be accomplished with a return trip by highly knowledgeable human-like machines. Jobs that require long hours can be handled by such machines, as humans require some kind of rest. We can list more places where they will be indispensable.
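As an aside, the kind of word prediction described above needs no deep intelligence at all. A tiny bigram model captures the idea (a hypothetical sketch, not the actual predictive system in Hawking's computer): it simply suggests the word most often seen after the current one in the user's past text.

```python
# Minimal bigram word predictor (hypothetical illustration):
# count which word most often follows each word in past text,
# then suggest that word as the next one.
from collections import Counter, defaultdict

def train(text):
    """Build a map: word -> Counter of the words that followed it."""
    follows = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict(follows, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train("the black hole evaporates the black hole radiates")
print(predict(model, "black"))  # prints hole
```

A habitual, limited vocabulary makes such counts sharply peaked, which is precisely why the predictions feel uncannily accurate for a single user.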
In "One-Half a Manifesto", Lanier criticizes the claims made by writers such as Ray Kurzweil, and opposes the prospect of so-called "cybernetic totalism", which is "a cataclysm brought on when computers become ultra-intelligent masters of matter and life."[12][13] Lanier's position is that humans may not be considered to be biological computers, i.e., they may not be compared to digital computers in any proper sense, and it is very unlikely that humans could be generally replaced by computers easily in a few decades, even economically. While transistor count increases according to Moore's law, overall performance rises only very slowly. According to Lanier, this is because human productivity in developing software increases only slightly, and software becomes more bloated and remains as error-prone as it ever was. "Simply put, software just won't allow it. Code can't keep up with processing power now, and it never will."
http://en.wikipedia.org/wiki/Jaron_Lanier
Hephaestus is the Greek god of blacksmiths, craftsmen, artisans, sculptors, metals, metallurgy, fire and volcanoes.
Hephaestus made all the weapons of the gods in Olympus. He served as the blacksmith of the gods, and was worshipped in the manufacturing and industrial centers of Greece, particularly Athens.
From Homer, Iliad 18.136 ff (trans. Lattimore) (Greek epic, C8th B.C.):
"[Hephaistos] was working on twenty tripods which were to stand against the wall of his strong-founded dwelling. And he had set golden wheels underneath the base of each one so that of their own motion they could wheel into the immortal gathering, and return to his house: a wonder to look at. These were so far finished, but the elaborate ear handles were not yet on. He was forging these, and beating the chains out."
He gave to the blinded Orion his apprentice Cedalion as a guide. Prometheus stole the fire that he gave to man from Hephaestus's forge.
The Greek myths and the Homeric poems sanctified in stories that Hephaestus had a special power to produce motion.[16] He made the golden and silver lions and dogs at the entrance of the palace of Alkinoos in such a way that they could bite invaders.[17] The Greeks maintained in their civilization the animistic idea that statues are in some sense alive. This kind of art and animistic belief goes back to the Minoan period, when Daedalus, the builder of the labyrinth, made images which moved of their own accord.
Dear @Vasily Osipov says "Stephan Hawking exaggerates the consequences."
Dear @Arturo Geigel says "Current apocalyptic views do no reflect the thinking of an AI but instead reflect our own current state of affairs."
Dear @Dejenie A. Lakew says "he uses limited and habitually used words of his profession and his limited social life in which the computer can learn and handle that selection, computation and prompting or suggestions to make."
Having read about Hawking's life, I agree with all your statements. Hawking got a raw deal in life, and even though he has a superior brain he is more bitter than ordinary people. He is already part machine. The first point is that Hawking is not an expert in the field of AI at all, and therefore he doesn't know what he is talking about. The second point is that the human mind is already a supercomputer. Naturally, human-plus-machine teams are better than machines by themselves; we made them, after all. It also shows how there may always be room for a human element. The third point is that machines with human-like performance will make economic sense only when they cost less than humans, say when their "brains" cost about $1,000. When will that day arrive? Today's very biggest supercomputers are within a factor of a hundred of having the power to mimic a human mind.
It remains to know the answer to this question: "Could we evolve ourselves out of existence?" If so, then we will probably be replaced by the machines:-)
Mahmoud,
You said:
"Could we evolve ourselves out of existence?" If so, then we will probably be replaced by the machines.
So you do not disagree with Hawking that it might be possible. If I believed it, then I would agree with Hawking that we should not build such machines, because my goal should be to benefit humanity, not machines. But I do not believe in that possibility, because I believe that the next big step in the evolution of life and humanity on this planet requires a global electronic communication infrastructure connecting us. The future belongs to humans connected together electronically.
Dear Louis,
I said, "It remains to know the answer to this question." It was a plausible question; it does not mean I agree with Hawking! Maybe my English was not clear enough.
With regard to your prediction of "global electronic communication", you should read his interview again:
Hawking also warned about the dangers of the internet, during the interview with the BBC. Referencing the director of GCHQ’s warning that the internet could become a command centre of terrorists, Hawking said: “More must be done by the internet companies to counter the threat, but the difficulty is to do this without sacrificing freedom and privacy”.
Dear Mahmoud,
Everything can be used for good or for bad. The Internet and phones have been and will be used by mafias and terrorists. I think Hawking is reasonable to point to the dangers. In this case the response of the US government is even more threatening: global surveillance of everybody. Who will protect us against possible abuses of such a Big Brother surveillance system? The real threat is not the rebellion of a superior AI intelligence but the control of humans by money through the use of computer infrastructure, being digested by the machinic economic system sustaining us.
I agree with Dejenie that technological apocalypse is one of the less probable reasons for our future demise. Human beings might also stay around for a very long time. It is the first time nature has done the experiment of creating a hyper-intelligent tool-making species (compared to other species). We do not know how the story of life will continue in the future.
Can artificial intelligence visualize novel inspiring dreams on a human-made screen without human intervention?
I disagree with Prof. Hawking... we do not yet have the computing power, nor the storage size and memory speeds, required to create an AI that is self-aware and capable of becoming smarter than us. In my humble opinion, let's worry about that in about 20-40 years' time, depending on what technology researchers can come up with.
I am rather on the side of those who have responded with "no". At the same time, I hear a minor voice saying "maybe yes" in the backdrop, which reminds me of a sort of sorcerer's-apprentice syndrome. Something that was originally typical of the European space (Lucian of Samosata's "Philopseudes" or "The Lover of Lies", Goethe's "Der Zauberlehrling" and Paul Dukas's "L'apprenti sorcier"), something that might now become a global syndrome.
I hope, however, that we make good use of AI, so that it continues to develop to our benefit.
@Marcel, I hope that inspiration will remain the gift of humans, so that the sorcerer's apprentices will always have some key pieces missing from the puzzle.
Sometimes scholars, writers, analysts, etc. put themselves in the service of politicians, who in turn are in the service of those who really rule a certain country. The real rulers like to distort the tangible facts so that they do not get blamed for their destructive policies. It is not artificial intelligence which will lead humanity to a fatal end but the warmongers, the racists, the supremacy seekers, the defamers of nations and religions, and those who have unabated greed for control of others' natural resources. Look around carefully and you will see them accumulating weapons of mass destruction, which means they are ready to blow up the earth at any moment and end life on it, or in most parts of it.
I agree that war and greed could be humanity's greatest risks. However, this is also beside the point of the original question. We are starting to use AI as part of war robots, as the link below shows:
http://www.theverge.com/2014/1/28/5339246/war-machines-ethics-of-robots-on-the-battlefield
We will then need to somehow put ethical rules into those machines. Currently the only ethical rule in such machines is that a human can veto a machine decision. What if this communication link failed and war robots started acting autonomously?
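That question is really about fail-safe design: a human-veto rule only helps if a broken link defaults to inaction. A minimal hypothetical sketch (all names invented, not any real system's API) of that design choice:

```python
# Hypothetical sketch of the human-veto rule discussed above.
# The machine proposes an action, but a failed or silent operator
# link must default to NOT acting (fail-safe), rather than letting
# the machine proceed autonomously.

def authorize(action, ask_operator):
    """Return True only if a human operator explicitly approves `action`."""
    try:
        decision = ask_operator(action)   # may raise if the link is down
    except ConnectionError:
        return False                      # fail-safe: no link, no action
    return decision is True               # anything but an explicit yes vetoes

# A dropped link vetoes the action instead of allowing it:
def broken_link(action):
    raise ConnectionError("operator unreachable")

print(authorize("engage target", broken_link))  # prints False
```

The opposite default ("act unless vetoed") is exactly the failure mode the post worries about, which is why the explicit-approval check is the safer convention.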
Dear @Mahmoud, I do not think that AI could bring us to the end of humanity, even if artificial intelligence is a risk to humanity, as Stephen Hawking has said. Anyway, I have found a good resource: the book Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat!
http://books.google.rs/books/about/Our_Final_Invention.html?id=JSJuAAAAQBAJ&redir_esc=y
Dear Ljubomir, thank you for the link to the book. It says, "The AI does not love you, nor does it hate you."
As you say, "I do not think that AI could bring us to the end of humanity even [if] Artificial Intelligence is a risk to humanity." Actually, forecasting the future of AI is often inaccurate. All we can say at the moment is that AI may indeed improve the world. One thing is for sure: I can forecast that AI will be able to fix all the problems with RG in 4 or 5 years. And that is good enough for me!
There is a serious issue implicit in these questions: namely, the fact that younger people are closer and much kinder toward technology, here AI. On the contrary, older people are always more reluctant toward anything that steps away from "anthropocentrism". It has been so in the past, and it is also the case nowadays.
This brings up the remark that the digital divide involves not only economic factors but also, much more subtly, generational ones.
A song from 40 years ago said: "I'm a Neanderthal man, you're a Neanderthal girl, let's make Neanderthal love in this Neanderthal world." The question arises spontaneously: after artificial intelligence, will humans be able to create artificial love in this artificial world? Or have they already made it, and we are not conscious of it? The final verdict will come from the next generations, but could it be an "artificial" verdict? We need to meditate, friends...
The artificiality of AI lies in two layers: the hardware and the software. No, let me put it more clearly: at the end of the day, the artificiality of AI consists in information, and the processing of information.
Dear Ierardi: the future has already arrived, indeed. And some of us have not realized it yet.
Same question in https://www.researchgate.net/post/What_do_you_think_about_artificial_intelligence#548077c9d039b14e268b45fb
Carlos thinks "elder people are always more reluctant to anything that steps aside from "anthropocentrism". It has been so in the past, it is also the case nowadays." and "the future has already arrived, indeed. And some of us have not realized, as yet."
I think such statements don't apply to RG members, so being old or young is not a good excuse or reason. On the contrary, as I said before, Hawking does not know AI, yet he is talking about it. So please do not accuse others of ignorance. If you check my RG profile, you will see that I have been teaching AI to graduate students for more than 10 years and have published many papers on AI applications in agriculture, yet I confess I am not an AI expert yet. So when we judge others, please don't use slogans like "elder people", "ignorance", and so on. I am using my own reasoning, and the same should be the case with most people on RG.
Dear @Fairouz. Thank you for the link. If you check the previous answers in this thread you will notice that dear Kamal gave that link already. However, I checked the question and answers there but the discussion there is more about possible applications and benefits of AI and are not of the type of debate going on in this thread. Anyway the two questions have common theme but the answers are fundamentally different. Thank you.
A short remark on dear Carlos's distinction. Young people use technology; they are used to it. Many young people don't even know what a corded phone is. That doesn't mean that they understand or analyse their technical equipment, and certainly not whether these instruments replace human mind and intelligence.
Older people must get used to modern technology; they have to learn how to handle these machines. I don't feel that they are anxious that their brains could be replaced.
I have no fundamental knowledge of AI, but I'm curious to learn the facts, without emotion, and to observe the developments.
Thank you, Mahmoud. I haven't had time to read the whole thread; I've only read the two questions. Indeed, the questions are the same but touch on different aspects. I think that machines cannot be intelligent on their own, or make strategies or plans for controlling others. Machines are only machines, made by humans. The problem is with human egocentrism and the tendency to dominate. The problem is with politics, not with technology.
I do not believe artificial intelligence will ever be able to replace human intelligence, the creator of the AI. The AI will become more and more sophisticated with the expansion of our knowledge and the nature of its applications, though. And AI can be abused in the wrong hands (as pointed out by James Doran), hence society needs to be vigilant.
I think that before contemplating such a doomsday scenario, we have to think more in terms of what is currently happening in AI, where it is headed, and what it takes to get to a state in which such a scenario could happen.
Most people contemplating this scenario are thinking about strong AI. Sadly (at least for me), strong AI, and AI in general, has gone astray. For a good read on why this is so, take a look at "The Voice of the Turtle: Whatever Happened to AI?" by Doug Lenat (www.aaai.org/ojs/index.php/aimagazine/article/view/2106). The truth is that we have the technology, but most AI researchers are interested in publishing results and not in building systems, which is a big element of achieving strong AI in the first place.
We must also ask whether strong AI is even needed for such destructive phenomena; the answer is no. As a matter of fact, we have dumb malware which people should be more afraid of. These "dumb" programs are really more dangerous (take the Stuxnet malware as an example) and are an actual threat, as opposed to the "evil" strong AI.
Even if scientists start producing strong AI systems, what is the main difference between this threat and a very resourceful hacker who wants to destroy a city with a meltdown (say, using Stuxnet)? At the moment, the answer is none, since given our computational power, I think the hacker would think at a faster rate than the strong AI and would therefore be more of a threat than the computer.
Even if the computing power of machines becomes great enough that they pose a bigger threat than current hackers, the question remains: what would be the driving motivation for such retaliation? Most likely self-preservation against people trying to disconnect them (a self-fulfilling prophecy?). Another alternative would be competition for resources, and we are already fighting over that.
In the end this is all speculation, but these are good questions to ponder if we are ever to coexist with a strong AI.
Artificial intelligence will never acquire all the capacities found in humans, and that is the main reason why it will never end humanity.
Human nature and emotion are two things that cannot be copied by any type of mechanical or artificial tool, however powerful its intelligence.
Thank you for sharing the question.
Dear Arturo, what you have said, that the danger of strong AI is less than that of hackers or mad politicians, sounds reasonable. Nonetheless, anything intelligent (human or machine) can be dangerous if it has goals different from yours (or is programmed with them). Another point worth mentioning is the fact that it took billions of years for the human brain to come into existence. Therefore, it is reasonable to assume that creating an AI/machine that mimics the human brain would require technology millions of times more advanced than anything humans have created so far. So we should not worry about that now, and should concentrate instead on making friendly AI work better (and forget about strong AI altogether).
This sounds a bit like a worst-case scenario, or even a new SF story. The movie The Matrix suggests a future where the dominant species on planet Earth is sentient machines and humanity is treated with utmost speciesism.
Dear Mahmoud, please do me a favor: I do not mean to personalize. My remark is somewhat supported by people such as J. Rifkin. One of the marvelous things here on RG is that most of us have a good education. And those among us who are not scholars or academicians still provide well-grounded concepts. Thank you!
Dear Hanno, your mark, as always, is clean and neat. I agree with you. If you allow me, I do not think that older people do understand the techniques and technologies, either. Rather, if you allow me, it is in principle a matter for tech freaks, or also specialists.
In any case, any current technology is designed to be socially friendly, i.e. people can use it without having to truly understand it.
In any case, digital natives are much fonder of technology, simply for cultural reasons. I see some advancement here.
There are many reasons why we collectively hope for autonomous robots. We hope that they would do the jobs that slaves did for the ancient aristocracies; it is a kind of dream of more freedom. There are other reasons too. In the Chernobyl disaster, volunteer soldiers had to go to the core of the melting reactor. It would be much better and more efficient to send specially shielded autonomous robots on these kinds of operations, which are necessary to save millions of lives. But as soon as we create a robot with this kind of autonomy, it can be used to build armies of autonomous soldiers that could kill millions.

The physicists who were persuaded to join the Manhattan Project, in the name of saving the world from the false claim that the Germans were on the verge of building the bomb, ended up delivering bombs on two Japanese cities where ordinary people like you and me were going about their daily lives. Then the Cold War and the arms race began, and then the Vietnam War was declared necessary to stop communism, and hundreds of thousands of mostly poor Americans died killing millions of even poorer Vietnamese in order to defend an economic system that was destroying them at home. If you think all that is totally irrelevant to the current thread, think twice.

But autonomous machines are not here yet. What we are dealing with now is the extension of the power of money holders through the Internet infrastructure, which allows the production machine to be extended globally to unseen levels of efficiency, and where everybody becomes an ever more insignificant cog in a global machine. So the real danger is not that machines become intelligent, but that a few use machines to treat everybody as objects. We are not threatened by our objects becoming living; we are threatened by our own use of our objects to make us objects.
Dear Louis,
robots, yes. But not as a replacement for humans. Robots get orders from humans and obey them. Humans decide and find solutions.
Hanno,
You are not a rule-following automaton, so you do not merely follow rules. A mechanism is built with rules and thus follows commanded rules. But an autonomous machine cannot be built as a pure rule-follower, because autonomy means exactly the opposite, and so any weak attempt such as Asimov's laws of robotics will fail in the face of what autonomy means.
Dear Louis,
According to wiki (link provided): An autonomous robot is a robot that performs behaviors or tasks with a high degree of autonomy, which is particularly desirable in fields such as space exploration, cleaning floors, mowing lawns, waste water treatment and delivering goods and services. The first requirement for complete physical autonomy is the ability for a robot to take care of itself. However, like other machines, autonomous robots still require regular maintenance.
A better definition: a fully autonomous robot can:
Rule #1: Gain information about the environment
Rule #2: Work for an extended period without human intervention
Rule #3: Move either all or part of itself throughout its operating environment without human assistance
Rule #4: Avoid situations that are harmful to people, property, or itself, unless those are part of its design specifications
http://en.wikipedia.org/wiki/Autonomous_robot
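For concreteness, the four criteria above can be expressed as a simple checklist. This is only a toy sketch: the `Robot` class and its fields are hypothetical names invented for illustration, not part of any real robotics API.

```python
# Toy sketch: the four Wikipedia criteria for a "fully autonomous robot"
# expressed as a checklist. The Robot class and its fields are illustrative
# assumptions, not a real robotics interface.
from dataclasses import dataclass


@dataclass
class Robot:
    senses_environment: bool  # Rule 1: gains information about the environment
    runs_unattended: bool     # Rule 2: works for extended periods without humans
    moves_unassisted: bool    # Rule 3: moves through its environment on its own
    avoids_harm: bool         # Rule 4: avoids harm to people, property, itself


def is_fully_autonomous(r: Robot) -> bool:
    """A robot counts as fully autonomous only if all four criteria hold."""
    return (r.senses_environment and r.runs_unattended
            and r.moves_unassisted and r.avoids_harm)


# A floor-cleaning robot plausibly meets all four; a remote-controlled
# drone fails Rules 2 and 4 and so does not qualify.
cleaner = Robot(True, True, True, True)
rc_drone = Robot(True, False, True, False)
print(is_fully_autonomous(cleaner))   # True
print(is_fully_autonomous(rc_drone))  # False
```

Note that the checklist deliberately says nothing about intelligence or agency; as the discussion above suggests, "autonomous" in this engineering sense is a much weaker notion than the autonomy debated in this thread.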
A lot of articles are devoted to the possibility of falling in love with a robot. In this pragmatic age a lot of people need care, and a lot of people are alone. Humans are emotionally attached to their cars, mobiles, and computers. To me, loving a machine is extremely boring: it is only an advanced mechanism, without a soul. A human being is a social creature and can be happy only with the same kind of creatures. http://prospect.org/article/heres-why-one-day-you-will-probably-fall-love-robot
Mahmoud,
I am not worried that the type of autonomous robot we can build in the near future would be threatening in the sense of possibly developing purposes in its own interest. The way we design robots precludes that possibility. We do not even understand what robotic agency could be, because we have not even a clue how any living organism has agency. The autonomous robots we can build in the near future would have no sense of a felt interest; they would be mindless autonomous tools/zombies. The only danger of autonomous tools is that they would be very good weapons, and we already have too much of that! In the far future, though, the only way to improve the performance of autonomous tools will probably be to give them very high learning capabilities (I am not talking about the shallow AI learning that exists today) whose internal dynamics might be open, without our knowing, to the emergence of real agency. If you push this logic, then a kind of will may emerge at some point, which might transform an autonomous tool into an artificial being with a real will to live, whose interests may diverge from our own. This is a remote possibility, but personally I do not believe it exists. My position is that there is only one way to evolve a will/agency in the universe, and it is the good old biological method, which takes about 4 billion years.
Don't ignore the (potential) force of spiritual (invisible) entities, which would not require the good old terrestrial biological method to emerge?
The promise and potential of AI is beyond doubt. However, it has both positive and negative sides. It could greatly improve our lives and solve the world's problems, such as disease, hunger, and even pain. Or it could take over and possibly kill many or all humans. As it stands, the catastrophic scenario is more likely. But above all, it is a form of superintelligence meant to closely mimic the human brain, and it will ultimately be programmed and controlled by humans. In this situation, human values hold greater importance. Elon Musk rightly said, "Hope we're not just the biological boot loader for digital superintelligence."
Dear Louis, the 4 billion years you forecast is rather pessimistic, but your position confirms what I already said: "not to worry about AI".
Actually, there is a good article, "When will computer hardware match the human brain?", which provides some calculations on how and when computers may compete with human brainpower (link attached). It describes how the performance of AI machines tends to improve at the same pace at which AI researchers get access to faster hardware, and it estimates the processing power and memory capacity necessary to match the general intellectual performance of the human brain. It concludes that it may seem rash to expect fully intelligent machines in a few decades, when computers have barely matched insect mentality in a half-century of development. For that reason, many long-time AI researchers scoff at the suggestion and offer a few centuries as a more believable period.
http://www.transhumanist.com/volume1/moravec.htm
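The style of extrapolation in that article can be illustrated with a back-of-envelope calculation. The figures below are assumptions chosen for illustration, not claims from the article itself: roughly 100 million MIPS for human-equivalent performance, roughly 1,000 MIPS affordable at the time of writing, and compute per dollar doubling about every 18 months.

```python
# Back-of-envelope, Moravec-style extrapolation. All three input figures
# are illustrative assumptions: ~1e8 MIPS for human-equivalent performance,
# ~1e3 MIPS affordable today, and a doubling of compute every 1.5 years.
import math

human_mips = 1e8      # assumed human-equivalent processing power
current_mips = 1e3    # assumed affordable processing power now
doubling_years = 1.5  # assumed doubling time for compute per dollar

doublings_needed = math.log2(human_mips / current_mips)
years_needed = doublings_needed * doubling_years
print(f"{doublings_needed:.1f} doublings ~ {years_needed:.0f} years")
# prints: 16.6 doublings ~ 25 years
```

The point of the sketch is how sensitive the conclusion is to the inputs: change the assumed human-equivalent figure by a factor of 1,000 and the estimate shifts by about 15 years, which is one reason estimates in this debate range from decades to centuries.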
I continue to side with the optimists, but I would like to point out that something similar has already happened in history. It is by now widely acknowledged that all other human species, including Neanderthals, were wiped out by Homo sapiens when the latter acquired superior social skills that allowed it to collaborate much more effectively (see e.g. the excellent book "Sapiens: A Brief History of Humankind" by Yuval Noah Harari). This is exactly the danger pointed out by Bostrom, Hawking, and others: if new skills emerge spontaneously, especially coupled with new goals and values, there might be an existential threat to older species that do not even realise what is slowly happening to them. In my opinion superintelligence is not only inevitable but will be here very soon. A machine has beaten the human chess world champion, a machine has beaten the human world Jeopardy! champion... True, these are still very narrow "intelligences" focused on only one goal. But think of a machine that self-develops the goal of turning out the maximum possible number of rubber stamps: the whole resources of the planet, including us, might be sacrificed on the altar of rubber stamps if it has access to physical extensions via the Internet. Hawking and Bostrom are absolutely right in pointing out that some thought has to be given to this issue now!
Carlo,
That the world chess champion is a computer and not a human is no different from a tractor being much stronger than a human pulling a plow. We created the tractor to be much more physically powerful than us, and we created the computer to be much faster at mechanical operations and at accessing large amounts of data than us. Both creations are totally mechanical in their operations. The fact that the world chess champion is a computer proves that a brute mechanical approach can be more effective than the human biological approach in that specific type of game. But that type of game exists only within the extremely simple universe of its own rules. A bee landing on a flower during flight has to solve incredibly more unconstrained problems than playing chess: bees solve real-world problems. Our computers can solve nothing without our programs, and our programs do not solve any problems unless we constrain the real world into a very limited problem so that the solution in the program will work. So at the end of the day, none of our machines really solves a natural problem; they only solve artificially constrained problems that we created and adapted to them. Our machines operate in the natural world, but within artificial contexts we create for them.
We did not drive the Neanderthals extinct; we mixed with them, and so we are partly Neanderthal. But we have to get away from the idea that our evolution is biological and recognize the evidence that we are bio-cultural creatures whose evolution is 99.999% cultural. Our brain does not operate simply biologically but culturally. A child who never learned to talk would be less developed than a primate child. We are not merely a species: biology does not specify what we are; our culture does. We think through cultural mind-tools and interact within cultural settings, and our evolution is based on the evolution of those cultural settings. The obstacles to our evolution are not biological but cultural. We do not need bigger brains, just as we do not need bigger muscles; we need better economico-cultural systems of interaction.
And our cultural evolution is not doomed to happen mechanically; it can only take place through action informed by our consciousness of what is really going on with us. Such a rise of consciousness cannot happen every day, but when it does not occur over a long period, the whole social process tends to extinguish all consciousness of itself. Everybody becomes constrained in their mind to accept what is as what ought to be, and we are gradually mechanized into the current state of our interactions. This is what the Internet is doing right now.
Louis,
I read somewhere that we are consciously aware of only 2% of what our brain does. If so, then it looks as if this biological 2% matters more than the 99.999% from cultural evolution. The specific systems built into our brain do their work automatically and largely outside our conscious awareness.
Mahmoud,
You see an apple on a table in front of you. Because you have a body with eyes and can receive information about your surroundings, you are able to be conscious of a table and an apple on it. Is that the only thing around you? No. But you cannot be looking at everything at once, hearing everything at once, touching everything at once, and thinking about your whole life at once. Even if you could, it would be terrible; it would be hell. Too much information is as bad as not enough information. So biological evolution has created organisms, each of which is aware, at each moment, of only what it needs for what it does.
There is another dimension to this question of awareness. When you see an apple in front of you on the table, the fact that you are not aware of the mechanism that makes your awareness possible subtracts nothing from the fact that you are aware that there is an apple on the table. So the fact that we are not aware of all that makes us aware is not a failure of awareness but a necessity. Imagine what kind of awareness it would be if, while looking at the apple, it were cluttered with everything that is going on to make you aware. It would be hell.
But acknowledging our limited awareness of what is around us, of what we do, and of how we do it does not preclude humans from becoming conscious of the fundamental aspects of what humans are and do. The reason is that the whole of biological evolution, especially since the beginning of the mammalian brain, intrinsically makes use of its internal workings in order to interpret the world. Biology has evolved an internal mammalian "know thyself" for interacting, and the transition from primates to humanity is conscious access to this biological "know thyself". I will end the story here.
Stephen Hawking's speech software goes open source for disabled
http://www.thehindubusinessline.com/news/science/article6658142.ece
Machines are machines, with or without software that is called intelligent. This piece of intelligence built by a human is mastered by its author(s): whoever made the software can delete it, reprogram it, change it, evolve it, and so on. So even if the material aspect of machines, the hardware, is sophisticated, it still needs human intelligence to build it and make it function, and the "artificial intelligence" itself is only sophisticated software based on mathematical equations and logical reasoning. So if these so-called machines are not useful, or damage human behavior, people can reduce their usage or discard them. Each person is responsible for how they use technology and its tools. The question of intelligent machines is thus really a fiction: no intelligence other than the human one is able to build robots, machines, technology, knowledge, civilization, and so on. And no tool or machine could have emotions and consciousness, which are pillars of the human being, even if some humans have more unconsciousness and egocentrism.
Dear @Fairouz,
Thank you for your answer explaining the inability of intelligent machines to have emotions and consciousness. Maybe the following discussion, borrowed from Christof Koch and Giulio Tononi and published in IEEE Spectrum, reveals the current state of the art in building a conscious machine:
"Scientists are not optimistic that modeling the brain will provide the insights needed to construct a conscious machine in the next few decades. Consider this sobering lesson: the roundworm Caenorhabditis elegans is a tiny creature whose brain has 302 nerve cells. In 1986, scientists used electron microscopy to painstakingly map its roughly 6000 chemical synapses and its complete wiring diagram. Yet more than two decades later, there is still no working model of how this minimal nervous system functions.
Now scale that up to a human brain with its 100 billion or so neurons and a couple hundred trillion synapses. Tracing all those synapses one by one is close to impossible, and it is not even clear whether it would be particularly useful, because the brain is astoundingly plastic, and the connection strengths of synapses are in constant flux. Simulating such a gigantic neural network model in the hope of seeing consciousness emerge, with millions of parameters whose values are only vaguely known, will not happen in the foreseeable future."
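The scale gap in the quoted passage can be made concrete with a quick back-of-envelope calculation. The figures below simply restate the numbers given above (302 neurons and roughly 6,000 mapped synapses for C. elegans; roughly 100 billion neurons and a couple hundred trillion synapses for the human brain).

```python
# Back-of-envelope scale comparison using the figures from the quoted
# Koch & Tononi passage. These are order-of-magnitude estimates only.
worm_neurons, worm_synapses = 302, 6_000
human_neurons, human_synapses = 100e9, 200e12

print(f"neuron ratio:  {human_neurons / worm_neurons:.1e}")   # ~3.3e+08
print(f"synapse ratio: {human_synapses / worm_synapses:.1e}")  # ~3.3e+10
```

In other words, the human brain is some eight orders of magnitude larger than the one nervous system we have fully mapped and still cannot model, which is the force of the authors' sobering lesson.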
Dear Francesca,
Thank you for the compliment. Spending time on this subject is not a waste of time; it concerns all human beings. The truth about technological development is that science as a whole marches on blindly, without any regard for the real welfare of the human race. So only with logical reasoning can we raise public awareness and make the world a safer place for future generations.
Thank you @Mahmoud for your interest. Indeed, robots and machines are inert objects, not living beings. They are devoid of consciousness, ethics, and emotions, i.e. they are devoid of soul. The so-called intelligence in them consists only of human-made programs for specific human task needs.
Thank you @Fairouz.
You are right. We should always remember that these 'intelligent' systems are machines and not alive. Although scientists have become very good at modeling physical systems (e.g., thermal, fluids, materials, mechanisms, ...), the models never work as well as biology. The fact is we are not as good at modeling (or seeing into) living systems and are missing something fundamental; call it control adaptation. This is what is missing in all our 'intelligent' systems.
@Mahmoud, any system said to be intelligent, i.e. built from expert systems, artificial neural networks, genetic algorithms, fuzzy systems, etc. (which are model theories from computer engineering science), is modeled only for specific tasks based on very specific knowledge, and often without real parallelism. So the so-called learning adaptation in such systems is based on implemented, known rules written by the programmers; at no time can a system said to be intelligent learn outside its written rules, and the same holds for control-adapted systems. The modeling of physical systems such as thermal systems, materials, automata, etc. is based on physical state equations from thermodynamics, mechanics, solids, waves, and other theories (i.e. from their theorems and equations). So the dynamics of intelligent systems can in principle be modeled entirely, or at least some aspects of them. I think the dynamics are at different scales of complexity. Anyway, I have no competence in biology and its modeling.
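As a toy illustration of that point, here is a minimal rule-based "expert system" sketch. The rule base and the `diagnose` helper are invented for this example; the point is that the system can only answer within the rules its programmer wrote, and anything outside them yields no answer at all.

```python
# Minimal illustrative expert system: its "intelligence" is exactly its
# rule base. Inputs not covered by a written rule produce no diagnosis,
# illustrating that such systems cannot learn outside their rules.
rules = {
    ("fever", "cough"): "suspect flu",
    ("sneezing", "itchy eyes"): "suspect allergy",
}

# Normalize rule keys once so lookups are independent of symptom order.
_rule_table = {tuple(sorted(k)): v for k, v in rules.items()}


def diagnose(symptoms):
    """Return the conclusion of the matching rule, or report that none applies."""
    return _rule_table.get(tuple(sorted(symptoms)), "no rule applies")


print(diagnose(["cough", "fever"]))  # suspect flu
print(diagnose(["headache"]))        # no rule applies
```

A learning component in a real system would update the rule base or model weights, but the update procedure itself is still a rule written by the programmer, which is the limitation described above.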
You are right, dear Mahmoud and Fairouz. Human beings like to think of themselves as possessing unique capabilities, such as being able to use tools and to employ a language with syntactic rules and processes. Super-intelligent machines can only do work that lends itself to automation through programming, and that only by some human being with much greater intelligence.
The capability of currently available AI machines stays far below that of human intelligence. However, without being accused of any prejudice, I can see that the resources (hardware and software) that aid AI design are advancing fast. Such machines are good at performing massive calculations, searching very large databases, playing chess, and so on, but they still cannot adapt to radically new problems in the urban jungle, nor manipulate the social environment. The main theme of my question is taking the right path: awareness and a kind of precautionary attitude about the risks (as well as opportunities) that such a superintelligent machine would offer humanity in the near future!
Intelligence is a feature of humans only; no tool or computer-based machine can be said to be intelligent, even if some "intelligent" software enables it to carry out tasks successfully. Maybe the term "artificial intelligence" should be revised to match its real sense!
Dear Fairouz, I also don't think it is possible for a creation (artificial intelligence) to be smarter than its creator (human intelligence) at the present time. So until we fully understand how our brain works, we won't be able to give a machine anything close to a human brain. Having said that, we can only see a short distance ahead, so the idea of a learning machine may become reality one day.
When we are fathers (or mothers) or professors, what do we want from our students and our sons and daughters? Not for them to be entirely under our command. We want them to be themselves. Thus, for instance, we want them to have their own criteria and to make their own choices responsibly, don't we?
No professor wants imitators or slaves; we want students who are able to surpass us. This, at least, when we do our jobs sincerely and authentically.
To be honest, AI and AL are sons, too. Our own sons. Hence...
Carlos,
AI and AL are sons of our illusion that we are machines. The scientific method is totally based on objectivation, which reduces absolutely everything to machine description, and this leads to the illusion that everything that exists is machine-like. When the illusion is complete, people start to wonder how a machine can become conscious and intelligent.
Dear Louis, AI and AL are both human, cultural, and scientific sons. And as good parents, we all love our sons the way they come to us, the way they are, sometimes not as our own image in a mirror. I truly wouldn't be so sure that AI and AL are sons of the illusion "that we are machines". The very basis of AI and AL, as we all know, is not the hardware but language, programming, or better still, information and information processing. Reducing AI and AL to sheer hardware is simply not fair, to say the least. Or not accurate.
The machine image of the world and the universe corresponds to the 17th, 18th, and 19th centuries. We have since developed far better metaphors and ways of understanding reality and ourselves.
Carlos, the machine image of the world and of humans is more alive than it has ever been. People genuinely begin to think they are machines: biological ones, highly complex ones, but machines. This has a deep psychological effect. Money began this transformation by turning our exchanges into monetary exchanges, as machines would make them. Money is like energy in a social machine-world; it is an objectivation of desires by the market: if you put it into the machine's slot, the machine does what the holder of the coin wants. It is built into the scientific method, which is based on objectivation. Scientifically, everything is a thing. Scientifically, you are a reproducing thing that talks to other things and thinks it is conscious, though it really is not; that is just an illusion in the machine. It thinks it loves, but again this is only intended for the machine to reproduce. If you really want to be scientific, that is what you have to believe.
Dear Louis, I would suggest a different take. The metaphor or paradigm of a machine typical from modernity has changed or rather is changing into the metaphor or paradigm of living systems: biology taken at large (…) and ecology. Systemic and more organic views that are, I believe, much more comprehensive than the mechanical one.
As such, the machine image corresponds to classical mechanics. (Let's remember "the great clockmaker", for instance.) The organic view has been developed in the last 2 years, approximately.
As we all know, in science, and particularly in spearhead science, there are no canons any longer. Therefore, we are no longer forced to believe anything else.
Carlos,
I share the organic view and I am trying to contribute to it, but it is still so weak. I am optimistic that it will eventually emerge as a radical change from the dominant machine cosmology to a living cosmology. Our current world is dominated by the machine cosmology, and it is undermining all the old traditions; it is undermining the planet's ecology, the climate, and the ocean; it is creating wars and mass manipulation. We really have a long road ahead to reverse the machine trend. It is a task for a whole era. Right now the whole culture is contaminated by the machine culture, and this is not diminishing but increasing. One of the biggest tasks on the scientific side is to understand the limits of objectivation and how it is related to our mind, because knowing by objectivation cannot know the living, by definition of what objectivation requires. So to really see the living, something new in scientific knowing, outside of objectivation, will be required. I think the key is to understand the actual process of objectivation in relation to our body. Only when we see clearly the boundary of what we are doing in science will we be able to extend it to the living in a way that does not make everything a thing.
Thank you Louis and Carlos for describing AI and AL. Personally, I think AI is neither our son nor an illusion. AI exists with us, but it has its limitations. We design these systems, implement algorithms, and program them (order them like slaves) to do certain tasks that they are good at. Computers surpass human ability in many tasks, e.g. playing chess and doing arithmetic, so we should make full use of them at what they are really good at. The bottom line is that AI machines are just so much scrap metal without people to maintain and operate them. The big tech companies are mostly interested in applying AI to improve their services, solve practical problems in their business, and generally make more money.
Dear Louis, OK: now I get your point, after your final comment, and from that point of view I do agree with you. To put it roughly (apologies!): your standpoint is a realistic one. Sharp, acute, straight to the point. A critique of the objectivation of our world. Such is indeed the case around the world, and such is the prevailing view of all those corporations, indeed.
My point of view is a recent one, as you mention. And it is the alternative to the mechanization of nature. I would just like to point out that there are numerous communities around the world living, working and doing research around the organic understanding.
I am firmly convinced (sorry for the word) that we are in the midst of a civilization shift.
Mahmoud,
There are two meanings of the expression "Artificial Intelligence" or "AI". The original meaning was given by the pioneers of the field, who invented the expression. For them, it literally expressed their belief that machines could become intelligent in the same sense humans are, and in the sixties Minsky predicted that machines would necessarily become more intelligent than us within about 25 years. We are now more than twenty years past our assumed expiration date as the most intelligent beings on this planet. The second generation of AI engineers started to use "AI" in an entirely different sense: for them an AI machine, AI system, or AI software is simply a machine, system, or software of the latest generation that incorporates AI engineering techniques and hardware. They totally abandoned the original meaning; for them, AI means state-of-the-art engineering. Carlos and I were talking about the believers in the original AI quest.
Thank you Louis for the explanation. I am in favor of your latter definition, i.e., AI as a tool/machine useful to humanity, without any exaggeration.