One of the most urgent issues in AI and data is ethics. As AI systems grow more powerful and become a part of everyday life, it is essential to develop and use them responsibly.
It is an important question and a modern-day dilemma. I agree that people need to be more responsible in their use of technology, but I also strongly believe that governments should educate the public on this issue more broadly, through widespread means.
As for the question of who bears responsibility, I believe the answer cannot be generalized. It would vary on a case-by-case basis, much like a road accident, where you cannot assign blame to any particular party in advance even when accidents happen at the same location. The person using the AI system would be the first person of interest, but he or she may not always be blameworthy.
The matter is further complicated by the interconnectivity of AI-based systems on IoT platforms. In such cases, a share of the responsibility is borne by the system administrator, who may or may not be the user.
The issue is complex and, unfortunately, not yet settled. Cyber attacks against AI systems complicate matters further, since attribution becomes even harder. In general terms, responsibility lies with the person or institution using the system. However, if the system does not behave as expected under the contract of use, responsibility falls on the manufacturer. For military systems, it rests directly with the government or armed forces using the system.
It is an important question that every organization faces today. When an AI system makes a mistake or causes harm, assigning responsibility can be complex. Typically, accountability lies with the people or organizations who design, develop, deploy, and maintain the AI system, since AI itself lacks consciousness or intent and cannot be held responsible like a human. This places a significant obligation on companies to rigorously test their AI, adhere to ethical standards, and implement safeguards that minimize risks. Moreover, policymakers and regulators have a crucial role in establishing clear guidelines that ensure transparency and accountability. Strong AI governance frameworks, covering data privacy, bias mitigation, risk assessment, and continuous monitoring, are essential to ensure these systems operate safely and ethically. Ultimately, responsible AI use is a shared responsibility among developers, users, and regulators to ensure technology benefits society while minimizing unintended harm.