There is a fear that the Fourth Industrial Revolution (4IR) will bring deep machine learning and the robotization of criminal conduct, and that human conduct will cease to play any part. How will this really affect the practice of criminal law?
The ongoing Fourth Industrial Revolution "blurs the boundaries between the physical, digital and biological spheres." It brings changes in artificial intelligence (AI), robotics, the Internet of Things (IoT), autonomous vehicles and 3D printing. In the business world, all existing business models are changing. Changes happen every day; their range is enormous and their pace breakneck. A whole new way of doing business is being created. Offices, protection perimeters and security guards are disappearing. The future of work brings new challenges and forces us to rethink jobs and employability. At the same time, new corporate vulnerabilities and forms of criminality are opening up, while the sophistication of attack tools is constantly growing. However, we cannot solve the challenges of the future with the tools of the past. That is why, in the era of the Fourth Industrial Revolution, fourth-generation corporate security and an updated criminal law framework are also needed.
The challenge for all sciences is to look at the use of the information and communication technologies that drive digital transformation in pursuit of a high level of competitiveness at all costs.
Artificial intelligence poses new challenges to various areas of law: from patent law to criminal law, from privacy protection to antitrust law. Among the approaches proposed to date, the most promising is the creation of a separate mechanism of legal regulation that clearly delineates the areas of responsibility between the developers and users of AI systems and the technology itself.
Law as a social phenomenon should be assigned the role of a kind of "social DNA" that can prevent a person from turning from the creator of the computer into, at best, its servant or even its slave and, at worst, being pushed out of the civilizational circuit altogether.
A separate direction should be the introduction of common ethical principles for AI systems, binding on all developers and users. The most promising approach in this respect is the one implemented within the framework of the Asilomar AI Principles. We believe that these Principles can become the basis for supranational mechanisms of legal regulation in the field of AI development and implementation.
Meanwhile, in the international arena, the attention of most organizations is focused on issues of gender inequality and minority rights; a number of jurisdictions (the USA, the EU) consider these issues fundamentally important to the implementation of AI, while other countries, including the Russian Federation, believe they lie outside the scope of AI technology development. The problems of access to technology and digital inequality between countries and social groups, uneven levels of education and economic development, and the monopolization of entire stacks of AI technologies by global players also lead international organizations to call on countries and technology corporations to move from competition to open cooperation, to strengthen the exchange of knowledge, resources and tools in the field of AI, and to deepen multilateral and interagency cooperation.
Maintaining an optimal balance between public interests on the basis of the precautionary principle, or weighing the known advantages, disadvantages and risks against the possibilities of using AI systems, is a fundamental thesis in the discussions on creating global regulation of the implementation of AI systems.
Potentially, abandoning the precautionary principle could make it impossible to hold global corporations accountable for the unintended consequences of the widespread adoption of AI solutions. Issues of liability for damage, for example where AI is used in management systems, can also have extremely significant socio-economic consequences. Under these conditions, the consolidation of the world community and the development of a conceptual international document on the basic principles of AI regulation seem more relevant and necessary than ever. In particular, the institutions of intellectual property, the tax regime and others will require deep reworking, which will ultimately make it necessary to solve the conceptual problem of granting autonomous AI a set of certain "rights" and "obligations". In our opinion, the optimal solution is to grant AI a specific "limited" legal personality (through the use of a "special" legal fiction), to the extent of making autonomous AI answerable for the harm it causes and for its negative consequences. This approach will undoubtedly require a future rethinking of the key postulates and principles of criminal law, in particular the institutions of the subject and the subjective side of the crime. At the same time, in our opinion, systems with AI will require the creation of an independent institution of criminal law, unique in its essence and content and different from the traditional anthropocentric approach. Humanity will have to abandon the tendentious interpretation of AI as a means, a piece of equipment or a tool for achieving certain goals whose application is controlled by a person. In addition, when building legal concepts it will be necessary to assume that artificial intelligence is by nature more static than human intelligence: it is far more resistant to external stimuli, its perception of reality cannot be impaired by a sudden strong emotion (affect), and it cannot be placed in a condition requiring a limitation of its capacity. Consequently, legal institutions that have found successful application in regulating human relationships can hardly be applied to the field of AI use. A new legal institution will require an entirely new approach. Within the framework of this institution, it seems appropriate to provide for an understanding of the subject different from the traditional one, based on a symbiosis of the technical and other characteristics of AI, as well as alternative types of liability, such as deactivation, reprogramming or the assignment of "criminal" status, which would serve as a warning to all participants in legal relations. We believe that such a solution could, in the future, minimize the criminological risks of using AI.