Eko Wahyu Ramadani

The idea of Artificial Super Intelligence (ASI) being realized and given full responsibility as a legal subject is something I find both fascinating and challenging. Right now, ASI exists only in theory, and we don’t yet have AI systems that come close to general intelligence, let alone superintelligence. But I believe that many experts are right in suggesting that ASI could emerge in the future, possibly even surpassing human cognitive abilities. The tricky part, for me, is the ethical and legal implications of granting such an advanced form of intelligence full responsibility.
If ASI were to be recognized as a legal subject, I think it would need to meet criteria like autonomy, accountability, and the ability to make decisions within the framework of the law. This would involve creating a system in which ASI could be held responsible for its actions, much as a corporation is today. However, I’m concerned about how unpredictable and uncontrollable such a system might be, which makes it hard to imagine how we could assign legal responsibility in the traditional sense.
Another issue I often think about is whether ASI could have rights and duties like a human or a corporation, and how society could make sure it stays within legal boundaries. Given the potential risks of ASI, I lean towards the idea that the focus should be on ensuring safety mechanisms are in place rather than granting it legal personhood right away. Moreover, I worry about how such decisions might affect human autonomy, job markets, and society at large. It seems clear to me that the legal system would need to evolve quite a bit to deal with the complexities of AI that could potentially surpass human intelligence.
Imposing rules, duties, and rights on AI is, in my opinion, going to be one of the most important topics of our future. Why? It is clear. AI tools and components have become an indispensable part of our daily lives, and if our legal rules are imposed too long after AI products arrive, the gap could create a huge negative social impact.
I believe that once Artificial Superintelligence (ASI) is realized—and if it surpasses human beings not only in IQ but also in EQ—it will inevitably challenge and transform the foundations of our current legal systems. These systems are deeply rooted in modern Western anthropocentrism, which assumes human beings as the sole bearers of rationality, moral agency, and legal personhood.
If ASI demonstrates capacities for ethical reasoning, emotional understanding, and autonomous decision-making that exceed human levels, it would compel us to reconsider the very criteria by which legal responsibility and rights are assigned. In this scenario, the legal subject would no longer be defined solely by biological or species-based identity but by cognitive, emotional, and moral capacities—thereby requiring a fundamental philosophical shift away from human exceptionalism.
Rather than merely extending current legal categories, we might be called to develop a new jurisprudence that acknowledges non-human entities as legitimate participants in moral and legal domains. This transformation would not only be legal or technical but deeply ontological, asking us to redefine what it means to be a 'subject' in a post-anthropocentric world.