If a government agency contracts for and establishes an administrative AI system that makes errors, should the government be liable in tort for those errors?
I answer in the affirmative: there should be public liability. The administrative sector of the government that deploys AI in discharging public functions should be held responsible for any error arising therefrom. An AI is a mere agent of the user, and the user should be held accountable for the tool's misdeeds, since the user enjoys the benefit of its use.
Attila Biró, this is an interesting aspect of the public authority dimension! Normally, there is no contract between the public authority and the citizen subjected to the system. So, without this contract and its associated liability clauses, can you still find liability? Or do you think there would be a claim, but liability would fall on the public authority or the private designer based on the contract?
In my understanding of jurisprudence, there should be public liability in tort for administrative AI. Any original text can be modified and recomposed by an administrative AI system, i.e. the output can differ from the input in many ways (e.g. several authors merged, automatic signatures, clauses and paragraphs over-applied, no final proofreading by the person responsible, …). Conclusion: without public liability in tort for administrative AI, bureaucracy as the reign of nobody would be reinforced, and such an administrative trend would strongly favor autocratic government.