Courts are notoriously overloaded. Added to this are long breaks during ongoing proceedings, which force repeated re-reading of the case. Furthermore, while judges are formally independent (although in the case of the Federal Constitutional Court many now doubt this), external interference is hardly ever verifiable; even Don Corleone's blackmail was not verifiable, since he merely made people offers they couldn't refuse. There are thus a number of factors that can tempt a court to take a shortcut to a verdict.

The problem here is the sheer power with which judges are ultimately endowed. While the legal remedies of appeal and revision exist, this does not guarantee that the next instance will handle the case more responsibly, nor can every defendant afford to pursue them. How, then, can the arbitrariness that is often suspected, especially in political matters, be kept in check? (In the present case, such a suspicion is suggested by an expert-opinion-style article: https://www.achgut.com/artikel/Gefaehrden_Karikaturen_den_oeffentlichen_frieden)

AI may offer a way forward. The article in question sets out the aspects the courts should have considered; since it was written by a judge, it can be assumed that the legal framework is described correctly. An AI trained to evaluate cases and compare them with the resulting judgments would likely reveal that certain aspects were not taken into account in the judgment. The AI is not meant to reach a judgment itself, but rather to analyze the completeness of the evidence and reasoning.
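To make this concrete, here is a very crude Python sketch of what such a completeness check could look like, assuming the judgment exists as plain text and the legally required aspects as a checklist. The aspect list, the TF-IDF similarity measure, and the threshold are purely illustrative placeholders of mine; a seriously trained model would of course go far beyond keyword overlap.

# Crude sketch: flag required aspects that a judgment text barely addresses.
# Aspect wording and threshold are illustrative placeholders, not a legal standard.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

REQUIRED_ASPECTS = [  # hypothetical checklist for a speech-related case
    "weighing of freedom of expression against the protected interest",
    "assessment of the concrete danger to public peace",
    "consideration of the context and satirical character of the statement",
]

def completeness_report(judgment_text, aspects, threshold=0.05):
    """Score how strongly each required aspect is reflected in the text."""
    vec = TfidfVectorizer().fit(aspects + [judgment_text])
    doc = vec.transform([judgment_text])
    scores = cosine_similarity(vec.transform(aspects), doc).ravel()
    # An aspect scoring below the threshold is flagged as possibly omitted.
    return [(a, float(s), s >= threshold) for a, s in zip(aspects, scores)]

# "judgment.txt" is a hypothetical plain-text export of the judgment.
for aspect, score, covered in completeness_report(
        open("judgment.txt", encoding="utf-8").read(), REQUIRED_ASPECTS):
    print(f"{'OK      ' if covered else 'MISSING?'} {score:.3f}  {aspect}")

The point of the sketch is only the output format: a public, per-aspect report rather than a verdict of its own.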

If the AI evaluation is made public, the court would at least be required to explain why it reached conclusions that only partially align with that analysis, or it would face unpleasant criticism on appeal or revision (the reverse also holds, of course: if the appellate or revision court wants to view certain cases differently, it too would have to give more detailed reasons than it does today). At the very least, politically motivated judgments would become significantly harder to push through, and the burden on appeal and revision courts would presumably even decrease, because a lack of due diligence at the lower level would hardly survive as a justification.

How would such an AI be trained? Naturally, the training must not end up in the hands of anyone with ties to politics, as that would open the door to bias. Two approaches come to mind:

(1) Training on judgments from completed proceedings, taking all instances into account. Already during testing, the AI should notice that the above-mentioned judgment differs from comparable judgments because it ignores certain procedural aspects. Since judgments are always public, there would be no data-protection issues.

Somewhat more far-reaching, but also conceivable, would be the inclusion of the case files, although this would be judged more strictly from a data-protection perspective. On the other hand, it would also expose sloppy work by the investigating authorities in certain cases.
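As a rough illustration of approach (1), training records might be structured as follows. The field names and the weak-labeling rule (an aspect discussed by a higher instance but missing below counts as a probable omission) are my assumptions, not an existing schema.

# Sketch of per-instance training records for approach (1), assuming
# published judgments are parsed and annotated with the aspects they address.
from dataclasses import dataclass, field

@dataclass
class JudgmentRecord:
    case_id: str                  # identifies the proceeding across instances
    instance: int                 # 1 = trial court, 2 = appeal, 3 = revision
    text: str                     # published reasoning of the judgment
    aspects_addressed: set[str] = field(default_factory=set)  # annotated labels

def omission_labels(records):
    """Weak labels: aspects a higher instance addressed but a lower one did not.

    A revision court discussing an aspect the trial court ignored is taken
    as indirect evidence that the aspect was wrongly omitted below.
    """
    by_case = {}
    for r in records:
        by_case.setdefault(r.case_id, []).append(r)
    labels = {}
    for case_id, recs in by_case.items():
        recs.sort(key=lambda r: r.instance)
        lower, higher = recs[0], recs[-1]
        labels[case_id] = higher.aspects_addressed - lower.aspects_addressed
    return labels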

(2) Training on "case reports." For this purpose, universities could assign exam papers that analyze cases according to all legal rules, explicitly highlighting procedural omissions, and have professors grade them for completeness and accuracy. Such reports could be prepared free of time constraints and other pressures, would not represent a significant cost factor, and might even benefit the students' future careers as judges or prosecutors.
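For approach (2), the graded reports would then have to be converted into training examples. A minimal sketch, assuming each report carries a professor's grade and a list of the aspects it examined; the record fields, grade scale, and JSONL layout are assumptions for illustration only.

# Hypothetical converter: professor-graded reports -> supervised examples.
import json

def reports_to_jsonl(reports, path, min_grade=0.8):
    """Keep only well-graded reports; emit (case text -> examined aspects) pairs."""
    kept = 0
    with open(path, "w", encoding="utf-8") as f:
        for r in reports:
            if r["grade"] < min_grade:   # filter out weakly graded reports
                continue
            f.write(json.dumps({
                "input": r["case_text"],
                "target": r["aspects_examined"],  # incl. procedural omissions found
            }, ensure_ascii=False) + "\n")
            kept += 1
    return kept

The grade filter is the design point: only reports the professors certified as complete and accurate would shape what the AI learns to expect from a judgment.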

In principle, such approaches already exist, although they often draw the critical comment that an AI cannot ethically compete with or replace humans. But that is not what it is meant to do, and opinions differ anyway on the ethical standards of some state jurists. If such tools exist, we should think about using them sensibly instead of rejecting them out of hand.
