08 August 2015

Stephen Hawking has said: "Once humans develop artificial intelligence, it would take off on its own and redesign itself ... The development of full artificial intelligence could spell the end of the human race." Elon Musk shares a similar view, saying: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that ... We are summoning the demon."

Ray Kurzweil, author of five books on artificial intelligence, including the recent New York Times best-seller How to Create a Mind, is (of course) in the vanguard of those pooh-poohing the concerns of such great thinkers and futurists as Hawking and Musk. He believes the benefits AI offers humanity far outweigh any risks, and that we are smart enough and ethical enough to manage those risks.

What is your opinion?
