Please provide an example of an ethical question an algorithm can’t take into account. I assert that whenever human authorities settle ethical questions, they do so in steps that involve matching the question/instance data with response/action data that is immanent in a given ethical framework. That makes this a straightforward NLU procedure: generate the properly trained response for a given query, and refine it over time through statistical scoring of accuracy using basic machine learning techniques (see the sketch below). An algorithm can’t derive a pure ought from an is, but neither can a human who is given no prior value system to ground the oughts. AI that generates ethical answers consistent with an internally coherent repository of ethics and values is, in theory, a simple task. Bottom line: an AI’s answers to ethical questions are only (but every bit) as good as the ethical database the specific AI is leveraging. Some ethical systems evolve and change with the times; some less so. The AI would have to take into account how malleable a given ethical system is, and have a database of the values underpinning the ethics and of how those values interrelate with the ethics. Again, because decisors of ethical law generally operate in steps, an AI should be able to handle even this.
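To make the "matching plus scoring" claim concrete, here is a minimal, hypothetical sketch in plain Python. The precedent data, function names, and similarity measure are all illustrative assumptions, not part of any real system: an "ethical framework" is modelled as a tiny corpus of precedent question/response pairs, a new query is matched to the closest precedent by token overlap, and feedback updates a running accuracy score.

```python
# Hypothetical sketch only: toy data, toy similarity, toy scoring.
from collections import defaultdict

# Toy "ethical database": precedent question -> sanctioned response.
PRECEDENTS = {
    "may I break a promise to prevent a greater harm":
        "yes, preventing serious harm outweighs the promise",
    "is it acceptable to lie to protect someone's feelings":
        "only when the lie causes no material harm",
}

# Running feedback tallies per precedent: [times judged correct, total uses].
scores = defaultdict(lambda: [0, 0])

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def answer(query: str) -> tuple[str, str]:
    """Return (matched precedent, sanctioned response) by token overlap."""
    q = tokenize(query)
    best = max(
        PRECEDENTS,
        key=lambda p: len(q & tokenize(p)) / len(q | tokenize(p)),
    )
    return best, PRECEDENTS[best]

def record_feedback(precedent: str, was_acceptable: bool) -> None:
    """Crude stand-in for 'statistical scoring of accuracy' over time."""
    correct, total = scores[precedent]
    scores[precedent] = [correct + int(was_acceptable), total + 1]

if __name__ == "__main__":
    precedent, response = answer("should I lie to spare a friend's feelings")
    print(response)
    record_feedback(precedent, was_acceptable=True)
```

A real system would use learned embeddings and a far richer value model, but the structure of the argument is the same: retrieve the framework's own answer, then score and refine.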
Are the makers of any tool liable for the harm it brings? It always depends on many factors. A robot is just another tool, as is an AI, so whether the maker is liable will depend on a number of factors, much as it would for any tool, machine, or even a registered employee or member of an institution carrying out approved and insured company protocol.
If you’re asking about an AGI or sentient super-intelligence, that’s a different question. If the tool has human-like volition, the tool is held accountable in that it must be taken out of service; but the human party that released the tool to the public is also accountable, unless the tool has been deemed generally safe by governing bodies, in which case the incident would be chalked up as an accident and insurance would cover it, much like an airplane malfunction. Bottom line: no system is perfect, and the accountability protocols that apply to any malfunctioning tool and to those who create it will apply to AI as well.
When the time comes that we are well convinced AGI has subjective experience, qualia, and free will, it will be held accountable. But it’s just as likely that we come to believe that humans don’t have any of these things and shouldn’t be held morally accountable for wrongdoing. It’s a thorny issue.
There are many ethical questions or "issues" related to the development of AI. Unfortunately, humans are ill-prepared, uninterested, or simply unwilling to give it the attention its potential warrants. Like nuclear energy, AI can be used for constructive and beneficial purposes, and it can also sow fear and mistrust and widen imbalances of power.
Most people will agree that increasing productivity and providing an efficient way to use the massive amount of information being generated daily is likely a positive outcome of AI. Of course, those who have lost their jobs due to the application of AI would not be so willing to agree with that assessment. On the other hand, applications are being developed (and are already beginning to be used) that will gather, process and interpret every detail of personal information concerning individuals, groups and even countries. This has a rather grave potential for abuse by a few at the expense of the many.
These few examples do not even begin to consider the possibility of artificial consciousness, which seems a probable eventual outcome of AI development. AI may well be developed to the point where it passes the Turing Test, but it need not actually be conscious to make us believe it is. Current AI systems are in no way sentient, yet there is no theoretical barrier to an "artificially" created, that is, human-built, system becoming self-aware in the same sense that the structure of the human brain and body allows us to be self-aware.
These comments are only suggestive of the philosophical, ethical and even moral challenges we are facing. However, there is no simple or easy resolution to the question. We have been grappling with equally difficult questions through the ages without satisfactory solutions.
“We have been grappling with equally difficult questions through the ages without satisfactory solutions.” That’s exactly right. Plus, cultural and technological changes have been confounding our ethics for ages. We have to remember that sentience alone isn’t what traditionally affords an entity its rights. If it were, ants would have rights. Dogs have more rights than ants, and humans more than dogs, and so on. It’s sentience plus a certain kind of sapience that impels us to grant humans special rights. Thus, the rights of AIs will fall along a spectrum. They don’t immediately jump to human-level rights across the board, nor do they necessarily find their ceiling at human-level rights. That’s where it gets tricky.
Here is an example of good AI, Amelia (see video below). But I am not sure that super-intelligence in the hands of criminals or terrorists will be simple to handle.