14 September 2012

As we strive to create embodied artificial intelligence with ever greater capabilities, do we have a moral responsibility to set an acceptable limit on the level of intelligence we aim to achieve? Is there a level of intelligence at which it becomes morally questionable to restrict a robot's movements, to alter its program, or to pull the plug? Or is it simply too early to think about such issues at all?
