Legal scholars are debating the "Law-Following AI" framework, which proposes that AI systems could bear legal duties without personhood. The challenge lies in ensuring these systems comply with laws without the capacity for intentional wrongdoing.
The attribution of responsibility to AI depends on its degree of autonomy. Autonomous agents as currently deployed do not meet the criteria for an autonomy that could bear responsibility; instead, they should be handled under product liability.
Accountability means taking responsibility for one's actions, and anyone can be accountable, whether an individual or an organization. Think of an AI system as a product, like a building or a car: artificial intelligence systems are the creation of individuals and organizations. Therefore, the originators or creators are accountable, not the product itself.
Implementing a "Law-Following AI" that performs legal duties without legal personality presents significant governance and compliance challenges. First, the transparency and reliability of the information entered into the system are crucial: without clear data input methods and developers with adequate legal training, there is a risk of unlawful acts, whether intentional or not.
Furthermore, the dynamic nature of the law requires constant real-time updating, taking into account legislative, regulatory, and case law changes. In this context, it is essential to establish robust governance processes, ensuring that the AI accurately reflects current legal understanding.
Finally, the "human in the loop" principle must be maintained, allowing for review and oversight of AI decisions. While technology can increase the efficiency, consistency, and speed of analysis, human judgment remains essential to ensure legitimacy, accuracy, and ethical compliance in legal decisions.
When we talk about a "Law-Following AI" that operates without legal personality, the risks are real and must be carefully considered:
i. Illegal acts, intentional or not – AI can make erroneous decisions if data is incomplete, biased, or manipulated.
ii. Lack of transparency – We often don't know exactly how the machine reaches a decision, which makes it difficult to fully trust the system.
iii. Updating regulations – Laws change constantly, and an outdated AI may act based on rules that are no longer valid.
iv. Developer training – If those who build or feed the AI lack adequate legal knowledge, the system can generate serious errors.
v. Legal liability – Without legal personality, AI cannot be held accountable; responsibility falls entirely on individuals or organizations.
vi. Overreliance on technology – Depending on AI without human oversight can lead to decisions lacking the context and critical judgment that only a human can provide.
vii. Ethical and social issues – Machines don't perceive social, cultural, or equity nuances, and this can affect the fairness or legitimacy of decisions.
viii. Cybersecurity – Digital systems can be targeted by attacks or manipulation, compromising important decisions.
ix. Audit difficulties – Tracking how and why AI made a particular decision can be complicated, hindering oversight and control (a minimal logging sketch follows this list).
x. Impact on trust and reputation – AI errors or failures can undermine public trust in the institutions or companies that use it.
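On the audit point (item ix), even a small amount of structured logging makes after-the-fact review possible. The sketch below is a hypothetical, minimal audit record written as append-only JSON lines; the field names and file format are illustrative assumptions, not an established standard for auditing AI decisions.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of a per-decision audit record (see item ix above).
# Field names and the append-only JSON-lines log are illustrative choices.

def log_decision(log_path: str, decision: dict) -> None:
    """Append one audit record per decision so reviewers can reconstruct it later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "rule_set_version": decision.get("rule_set_version"),  # legal snapshot applied
        "inputs": decision.get("inputs"),                      # data the system relied on
        "outcome": decision.get("outcome"),                    # what the system decided
        "human_reviewer": decision.get("human_reviewer"),      # who signed off, if anyone
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage
log_decision("decisions.log", {
    "rule_set_version": "2024-Q2",
    "inputs": {"document": "disclosure_draft_07"},
    "outcome": "PENDING: awaiting human review",
    "human_reviewer": None,
})
```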
In conclusion, AI can be a powerful tool, but we cannot ignore the fact that we still depend on humans to oversee, interpret, and ensure that decisions are fair, safe, and trustworthy.
Here's a provocative question: "If an AI can follow the law without ever having consciousness, judgment, or responsibility, to what extent can we fully trust it—and where does technology end and the irreplaceable need for human insight begin?"