In Dallas, Texas, just a few days ago, the police used a robot to bring down a gunman who allegedly killed five police officers. I am wondering:

1) Should such robots be used in this way?

2) Is it not possible that such robots may someday turn on their human "masters", despite Asimov's Three Laws of Robotics?

3) Self-driving cars are now being tested, and in February one such car caused an accident. How do you think these developments will evolve?

I ask these questions because I have previously worked in AI (Artificial Intelligence) and Machine Learning, fields that are increasingly being applied in robotics. I have my own view on these technologies, and I wonder what others out there think...