The Phalanx CIWS is a computer-controlled cannon system deployed on many American warships. At the flick of a switch, it is designed to detect, track, engage, and confirm kills using its own self-contained radar. No human operator can match the performance of such a device, yet the duration of its automatic operation is still regulated by a human. Will a day come when that automaticity is handed over to another automaton because of its superior performance? How many layers of automaticity should be tolerated when fighting a war? Some have suggested using blockchain computer code to better regulate autonomous systems (Husain 2017).
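To make the blockchain suggestion concrete, below is a minimal sketch (in Python) of what such regulation might look like: an append-only, hash-chained ledger in which every human authorization of the automatic engagement mode is recorded and can later be verified for tampering. The names (EngagementLedger, authorize, the action strings) are hypothetical illustrations, not part of any fielded system or of Husain's proposal.

```python
# Illustrative sketch only: a hash-chained (blockchain-style) audit ledger
# for recording human authorizations of an autonomous weapon's automatic mode.
# All class and method names here are hypothetical.

import hashlib
import json
import time
from dataclasses import dataclass, field


@dataclass
class Block:
    index: int
    timestamp: float
    operator_id: str   # which human authorized the change
    action: str        # e.g. "ENABLE_AUTO_ENGAGE" or "DISABLE_AUTO_ENGAGE"
    prev_hash: str
    hash: str = field(init=False)

    def __post_init__(self) -> None:
        # Hash the record contents together with the previous block's hash,
        # so altering any earlier record breaks every later link.
        payload = json.dumps(
            [self.index, self.timestamp, self.operator_id, self.action, self.prev_hash]
        )
        self.hash = hashlib.sha256(payload.encode()).hexdigest()


class EngagementLedger:
    """Append-only chain of authorization records."""

    def __init__(self) -> None:
        self.chain = [Block(0, time.time(), "SYSTEM", "LEDGER_CREATED", "0" * 64)]

    def authorize(self, operator_id: str, action: str) -> Block:
        prev = self.chain[-1]
        block = Block(len(self.chain), time.time(), operator_id, action, prev.hash)
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every hash link; returns False if any record was altered."""
        for prev, curr in zip(self.chain, self.chain[1:]):
            if curr.prev_hash != prev.hash:
                return False
            recomputed = Block(curr.index, curr.timestamp, curr.operator_id,
                               curr.action, curr.prev_hash)
            if recomputed.hash != curr.hash:
                return False
        return True


if __name__ == "__main__":
    ledger = EngagementLedger()
    ledger.authorize("operator-07", "ENABLE_AUTO_ENGAGE")
    ledger.authorize("operator-07", "DISABLE_AUTO_ENGAGE")
    print("ledger intact:", ledger.verify())                # True
    ledger.chain[1].action = "ENABLE_AUTO_ENGAGE"           # simulate tampering
    print("ledger intact:", ledger.verify())                # False
```

Such a ledger does not decide anything by itself; it only makes the chain of human (or machine) authorizations auditable after the fact, which is one modest way code might "regulate" layers of automaticity.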

Reference

Husain A (2017) The Sentient Machine: The Coming Age of Artificial Intelligence. Simon & Schuster, New York.
