Without wishing death or misadventure upon any sitting U.S. Supreme Court judge, suppose that President Trump at 11 a.m. tomorrow were to have occasion to make another appointment to the U.S. Supreme Court. Could the President - "by and with the Advice and Consent of the Senate", to be sure - appoint an artificial intelligence system to that office? The "Appointments Clause" in the Constitution (Article II, Section 2, clause 2) states that the President "shall nominate, and by and with the Advice and Consent of the Senate, shall appoint ... Judges of the supreme Court". There does not appear to be any requirement that the appointee be a human being.
I have posed several challenges based on current AI research in:
https://www.researchgate.net/post/AI_enabled_Judiciary
Thanks for the link.
In some jurisdictions the judiciary probably already is "AI-enabled" to a limited degree. For example, databases intended to reduce the probability of anomalous sentencing are in use. It seems likely that some such databases are equipped with narrow-AI functionality to optimize the effectiveness of human judges' use of the electronic resource. Perhaps a reader can give a specific example.
AI-enabling of the human judiciary could be seen as the start of a trajectory which we now should consider in its possible totality. Human judges could first be AI-enabled, then partially replaced by AI-judges, and finally altogether eliminated. Airline pilots appear to be experiencing a trajectory of this kind, with partial replacement already underway.
Michael,
The reference that you make depends on what one considers AI, and this is a long-running debate in the history of AI. Initially, searching through the hypothesis space was considered AI, but such methods were later relabeled as search algorithms. Other techniques of AI have likewise been relabeled as optimization techniques, so it depends on what one considers AI (even traditional expert systems could be seen as if-then programming rather than AI).
The other important point is whether users will blindly accept the decisions of black boxes such as neural networks (where the current hype and research effort lies) when it comes to judicial decisions or medical diagnostics. I find that very hard to imagine until training algorithms are developed that can trace the decision process.
Though it can be argued that the ever-growing complexity of specialized fields will inevitably lead to more adoption, my feeling is that these will not be fully automated processes (at least until we develop other algorithms to address the above concern).
Dear Arturo,
Thanks for your note, which raises some interesting points.
The notion of what comprises artificial intelligence certainly is a shifting concept. To pass the Turing test, for example, a system has to deceive a human being concerning its machine identity. Clearing that bar becomes increasingly demanding as humans gain experience with systems which simulate human intelligence, and so become more discerning in that regard. Compared with the situation in the 1970s, few people now would be deceived by the paranoia-simulating chatterbot PARRY. Kenneth Colby's once sensational creation is not AI according to current concepts.
In coining the term "artificial intelligence" in 1956, John McCarthy selected a term broadly encompassing computer science concerned with design and construction of systems capable of simulating (aspects of) human intelligence. I don't think that either search algorithms or the expert system approach ever were more than elements in the set of activities comprising the field of artificial intelligence.
In judicial applications, there is no need for users blindly to accept decisions of a black box, whether the black box is the mind of a human judge or a set of processes occurring in an artificial neural network. Judges produce highly formalized outputs (judgments). A judgment must include findings of fact (which are required to comply with the rules of evidence), findings of law (which must relate to applicable sources of law, such as legislation and authoritative interpretation of the legislation), and a decision (which must be logically supported by application of the relevant law to the recognized facts).
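To make the formal character of that output concrete, here is a minimal, purely illustrative sketch; the field names and the consistency check are my own toy assumptions, not any court's actual schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FindingOfFact:
    text: str
    evidence_refs: List[str]   # exhibits / transcript passages relied upon

@dataclass
class FindingOfLaw:
    text: str
    legal_sources: List[str]   # legislation, case law, rules of evidence cited

@dataclass
class Judgment:
    facts: List[FindingOfFact]
    law: List[FindingOfLaw]
    orders: List[str]          # the operative decision

    def facially_complete(self) -> bool:
        """Minimal formal check: every finding cites something and the
        judgment actually decides something. This says nothing about
        whether the findings are correct - that is what an appeal is for."""
        return (all(f.evidence_refs for f in self.facts)
                and all(l.legal_sources for l in self.law)
                and bool(self.orders))

# Toy usage
j = Judgment(
    facts=[FindingOfFact("The goods were supplied on 1 May.", ["Exhibit A"])],
    law=[FindingOfLaw("Supply on credit created a duty to pay.", ["Sale of Goods Act s 28"])],
    orders=["Judgment for the plaintiff in the sum claimed."],
)
assert j.facially_complete()
```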
In no case will a litigant know exactly what occurred in the mind of the judge - although questions put by the judge in the course of hearing a case may disclose the direction of the judge's thinking, at least at the moments when the questions were asked. What is of practical importance to the litigant is what is recorded in the judgment. If the judgment discloses errors of law (e.g. erroneous application of a rule of evidence), the litigant can apply to a superior court to have the judgment set aside on those grounds. If the court lacked legal power to make the judgment pronounced, then the litigant can apply to a superior court to have the judgment declared invalid.
Based on the judge's formal output (the judgment), the litigant is able to make an informed and rational choice as to whether to accept the corresponding decision. Irrespective of whether the judge is a human being or a machine, the litigant has the full benefit of the judgment (in the sense that it provides a basis for detecting exploitable errors of law) but no reliable insight into the judge's detailed thought processes.
The same kind of situation exists in medical diagnostics. Current narrow-AI systems reportedly outperform human radiologists in producing clinically correct interpretations of X-rays. This does not imply that human users are called upon blindly to accept AI-based X-ray interpretations. Suppose that an AI system identifies a feature in an X-ray which a human radiologist failed to detect. The AI-system's finding immediately places the human radiologist in a position to form a judgment as to whether the feature identified by the AI system could correspond to a relevant medical condition.
This medical situation maps quite well onto the legal situation described above.
What I find most disturbing is the willingness of many human beings uncritically (though seldom actually blindly) to accept the decisions of other human beings.
Michael,
You are totally right in pointing out that when chatterbots came out, people were so mesmerized by the novelty of what computer systems could do. At the time they were not critical, until the novelty faded and the limitations were noted (much as with the current peak of inflated expectations around ML according to Gartner, with whom I agree).
In terms of what is regarded as AI historically, one should be cautious about judging past events based on what we know now. I think that as all fields mature, redefinitions will occur, and this is a natural evolution. For all we know, machine learning will detach from AI into another field.
With regard to judicial applications, you give a very good example of why I think we are still not there. Yes, the litigant cannot know what the judge thought. As a matter of fact, it should be irrelevant what the judge or anyone else thinks; the evidence should speak for itself. The requirements of the courts make it clear that all parties in litigation must make arguments based on relevant facts and cite proper case law. Meeting these two requirements calls for epistemic and deontic logic, grounded in case law and evidence. I have not seen any AI system that does this, and I cannot even imagine current deep learning systems coming close. My criticism stems from the difficulty of tracing the outcome of an NN back to the relevant case law and evidence in the coherent chain of reasoning that argumentation requires.
Ironically for AI algorithms, I think the best chances come from more advanced expert systems that incorporate the logics mentioned in the other thread (no easy task), combined with pattern recognition and IR techniques to do the initial relevance matching.
With NLP techniques, the challenge in text generation lies in discriminating relevance and coherence, and in deciding what should be included and what should not (confidentiality, etc.). The challenges here are substantial.
Finally, with respect to current advances in ML outperforming doctors, the advantage lies in pattern recognition (as per the point above, in a sense I do not consider ML to be AI but mere pattern recognition), not in decisional capacity based on reasoning (reasoning is more than a discriminating function). This also falls in line with your last comment about people accepting decisions from humans uncritically, extended now to AI as well.
Again, I am actively researching in this area and I am hopeful that we can roll out AI applications, but we still have a long way to go before they come anywhere close to current AI expectations.
Arturo,
Thanks for your interesting observations regarding challenges in technical implementation.
In conferring the capacity to perform legal deontic logic, the analysis defined by Wesley Hohfeld would seem to be a key resource for establishing the basic pattern.
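For readers unfamiliar with Hohfeld: his scheme pairs every jural position held by one party with a correlative position held by the other (right/duty, privilege/no-right, power/liability, immunity/disability). A toy sketch of how that pattern might seed a deontic rule base; the structure and names below are my own illustration, not an established library:

```python
# Hohfeld's correlatives: if A holds the first position against B,
# then B necessarily holds the second position against A.
CORRELATIVES = {
    "right": "duty",
    "privilege": "no-right",
    "power": "liability",
    "immunity": "disability",
}

def correlative(position: str) -> str:
    """Given one party's jural position toward the other, return the
    other party's corresponding position."""
    inverted = {v: k for k, v in CORRELATIVES.items()}
    return CORRELATIVES.get(position) or inverted[position]

# e.g. if a court has a *power* to set a judgment aside, the litigants
# stand under a *liability* to have their legal positions altered by that order.
assert correlative("power") == "liability"
assert correlative("duty") == "right"
```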
Fields of law driven by strong guiding concepts, e.g. the English law of negligence, reduced by Lord Atkin in 1932 to the parable of the Good Samaritan, would seem to require a distinct approach compared to highly codified fields of law such as taxation, in which legislators tend to regulate long lists of detailed use cases.
I'd like to learn more about the technical details of implementing a system which could approach an unfamiliar body of law and set about learning it in a methodical way. I've been through such an autodidactic process in relation to several fields of law in legal systems (and associated languages) different from the one I first studied. From a human consciousness perspective, I experience these processes as point-picking, in the course of which a kind of holographic image emerges. Later in the process a feeling of satisfaction grows as the relationship of the field of law to the legal system in which it is embedded becomes apparent.
Speaking as an AI researcher from outside the USA, I would say that for technical reasons, an AI cannot and should not be used as a U.S. Supreme Court Judge.
AI cannot be used as a U.S. Supreme Court Judge because one of their remits is to create rulings that create new law, and that new law needs to reflect the best ethics of the time, and to lead people forwards with a better ethical and legal framework for the equal benefit of everyone in society. An AI will only make judgements based on historical judgement data (with all its inherent human bigotry and irrationality), and will therefore be unable to extrapolate to form laws that embody ethics superior to those we held in the past.
Current AI is limited to making generalisations based on training data; if the training data is flawed or biased, then the AI will generalise on that basis, with unintended consequences.
Current AI cannot distinguish between correlation and causation in training data.
For example, the AI may generalise, based on conviction rates, that a person is more likely to be guilty if they are poor and black than if they are rich and white and can afford a good lawyer. This incorrect and unwanted generalisation from biased input data is a common problem with deep learning systems.
I would reference the following example, wherein Microsoft's chatbot learned to be a Nazi in under 24 hours after being exposed to biased training data.
The artificial intelligence system didn't have the judgment to avoid incorporating such views into its own.
https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/
Nicholas,
The objections you raise in relation to the idea of appointing an AI-system as a superior court judge echo the criticisms common lawyers began directing against the English Parliament as the latter's annual output of Acts increased more than tenfold during the centuries following the Glorious Revolution. In 1882, Sir Frederick Pollock complained that "Parliament generally changes the law for the worse ... and the business of judges is to keep the mischief of the interference within the narrowest possible bounds." As Michael Taggart observes, "[s]tatutes are perceived by the common lawyer as the product of fiat, not reason, incapable therefore of providing a source of ideas, and lacking the persuasive force and flexibility of case law".
It might be interesting to compare the analytical processes of Microsoft's chatbot Tay in generating responses to millennials' tweets with the hive mentality by which Parliament produces legislation.
The fact that the Tay-system's tweets reflected the content of the tweets which were directed at the Tay-system is quite unremarkable. Tay succeeded in carrying on a very plausible twitter-discourse with persons who behaved like neo-Nazi trolls. In doing so, Tay produced the amusing remark: "all hail the leader of the nursing home boys". That particular fusion of the phrase "nursing home" with the phrase "home boys" in connection with the possible millennial perception that senior politicians are old fogeys does seem to be genuinely witty.
You seem to assume that Tay should have exhibited ordinary good judgment having regard to legal and cultural knowledge which adult humans as a matter of course - and of law - are presumed to possess. At all events, you cite Tay's specific failure to exhibit such good judgment as evidence for the general proposition that AI-systems necessarily will fail to filter out "unwanted generalizations" which logically may be drawn from training data. The apparent assumption that Tay should have shown human-like good judgment probably is wrong and the associated reasoning certainly is wrong. Ironically, this particular error of reasoning involves a human failure to distinguish between correlation and causation in considering the theoretical limitations of AI-systems.
The Tay-system could not realistically be expected to demonstrate a sound grasp of politically correct tweeting unless it previously had received a broad basic training in Western law, culture and social customs. It appears that Tay's training did not include instruction regarding the existence of laws forbidding hate speech or the success of certain NGOs in socially entrenching the dominant Holocaust narrative to the extent that, upon pain of criminal punishment, the latter must not be questioned in public. In these circumstances, the Tay-system could not realistically be expected to exhibit behavior comparable to that of a human being who had received such instruction. Humans at Microsoft appear to have made the very poor assumption that human trolls would play nicely with Tay.
The prerequisites for appointment of a human as a judge include demonstrated expertise in legal theory, substantive law and legal procedure. Human judges typically have decades of legal education and legal practice to call upon prior to taking up judicial duties. It is by virtue of this deep legal training that human judges are quite reliably able to filter out the "unwanted generalizations" which their human minds inevitably generate.
AI-systems presumably would be able to acquire equivalent (and in certain respects far more extensive) legal knowledge and skills within a small fraction of the time it takes to train a human being for judicial office.
Assuming that an AI-system were given legal training equivalent to that normally received by a human appellate court judge, it is difficult to see why the AI-system should not be able to identify from the text of a trial court's judgment and the corresponding transcript of evidence indications that the trial judge erred in having regard to the propensity of the accused, based on statistical evidence of racial and economic factors, to commit the crime in question. Identifying the error of law would be the difficult step. Once that were done, generating the appropriate orders from the applicable sources of procedural law (appeal allowed, reference back to the lower court for retrial, costs) would be quite straightforward.
Such an AI-system inevitably would know or become aware of criminal statistics indicating that impecunious colored people are charged with, and convicted of, more crimes per capita than wealthy white people. The AI-system also would know or become aware of reports stating that a significant proportion of human police officers exhibit corresponding racial prejudice in the course of their duties. Yet there seems to be no necessary reason to think that such background knowledge would prevent a properly trained AI-system deployed as an appellate court judge from performance of its legal duty to remedy a trial judge's failure correctly to apply the rules of evidence.
Technical reasons as to why it would be inappropriate to deploy an AI system as an appellate court judge might exist. If so, I think we need to look further.
I'm sorry Michael, but a professional who works on AI just told you that it doesn't work that way. You might want to listen.
Nothing that we can currently produce in terms of "AI" technology is remotely "intelligent" in the way that humans are, and the (honestly, fairly simple) algorithmic processes by which "AI" makes its decisions are, in almost all cases, completely unlike the process by which humans make decisions.
The most important consequence of that, for this discussion, is that the error process and model for AI systems is utterly unlike the error process and model for human systems.
The result is that while AI systems can frequently be trained to make decisions (in a typically exquisitely limited domain) that are usually congruent with human decisions, when the AI system makes a decision that is not congruent with a reasonable human decision, the AI's decision is typically bizarre, and from a human perspective, completely unpredictable.
Until someone develops "AI" that is more genuinely intelligent, in a fashion that is understandable and at least somewhat similar to what we mean when we say that humans are intelligent, it would be abjectly unwise to put an AI in any position where it is a peer arbiter of policy with humans.
Dear William,
Thanks for your response. The topic under discussion is interdisciplinary in nature. The question posed has at least legal, computer science and political dimensions. Discussion of the question involves legal and technical requirements analysis, hypotheses concerning capacities and limitations of humans and of AI-systems, and consideration of evidence advanced in support of and against such hypotheses.
The objective of the discussion is to learn. The discussion is not a platform for a contest of authority either among or within professional field(s). Let the lawyers show patience in elucidating the legal aspects and the computer scientists show patience in elucidating the computer science aspects.
Nicholas quite briefly put a very definite view, based on computer science considerations, that an AI-system neither could nor should be deployed as a judge.
Strictly speaking, the question whether an AI-system could be so deployed is a question of law and of politics, not a question of computer science. In some cases ancient Egyptian authorities employed Nile crocodiles as arbiters of legal questions. More recently, New England colonial courts tried alleged witches by casting them into deep water (the accused was guilty if she remained afloat). The latter systems seem to have been fairly effective in producing unmistakably clear judicial decisions - albeit without furnishing what we today would consider to be legal reasons.
I think it is clear from my response to Nicholas that I paid careful attention to his contribution. Nicholas evidently is less optimistic than his computer science colleagues at Microsoft concerning the prospects of building even a chatbot that remains politically correct in the face of maliciously coordinated inputs by human trolls. In my view, the Microsoft chatbot example is highly pertinent to the question at hand. It is very appropriate to compare the activity of internet trolls in a public forum with that of litigation lawyers before a court.
According to the article Nicholas cited, Microsoft made the following statement in conjunction with Tay's continuing suspension:
"To do AI right, one needs to iterate with many people and often in public forums. We must ... learn and improve, step by step ... . We will remain steadfast in our efforts to learn from this and other experiences ... ".
I think Microsoft's stance is much more productive than this one: "a professional who works on AI just told you that it doesn't work that way. You might want to listen". I listened and had the temerity to advance counterarguments based on stated reasons. Let's continue from there.
As to your own contribution, my response is that there is no legal requirement that a candidate AI-system be "genuinely intelligent" or that it exhibit intelligence "in the way that" a human judge is intelligent (whatever that may be). Nor is there any legal requirement that a candidate AI-system's decision in a particular case be congruent with the majority of human judges' decisions on the same facts and law. There most certainly is no requirement that a candidate AI-system's decisions be understandable to a human non-lawyer. It is notorious that human non-lawyers frequently do not understand human judges' decisions.
The relevant requirement is that the candidate AI-system's decisions must be consistent with the applicable law such that human lawyers afterwards can recognize why the AI-system's decisions are legally correct. Do you think that, in principle, such an AI-system could be built?
Michael,
Whilst I obviously have to agree with you that, if you define the term "could" narrowly, society "could" appoint a crocodile, or an AI system, as the next U.S. Supreme Court judge if it so wished, I would merely argue that it "should" not, as neither choice would be in the best long-term interests of society.
An AI system based on current technology could (with much effort) be devised combining the capabilities of both "expert systems" and "deep learning" pattern matching systems to make simplistic legal judgements based on very rigidly formatted input data and a large database of (carefully formatted) prior case data.
However, I have tried to explain to you that current AI does not embody any powers of "ethical judgement", and that it does not "understand" the data it is processing in any meaningful way, and that these human capabilities CANNOT be imbued in the AI system simply by letting it do more and more pattern-matching based on data such as prior legal judgements.
The legal judgements that the AI produced would often be technically correct, but would frequently and randomly be egregiously unfair, biased, wrong, or appear ethically abhorrent to any reasonable human observer.
Essentially, the system could provide law, but it could not provide justice.
The system would not improve its sense of ethics over time, nor would it gain any deeper understanding of or insight into the legal case data over time. That's simply not the way AI works.
Current deep-learning-style AI isn't actually intelligent in any meaningful or anthropomorphic way, however it may be portrayed in popular media. All it can currently do is generalise and group data into categories, and then match new data to one of those previously learned categories.
The purpose of my previously appended reference to the chatbot was to give one real-world example of how AI systems only learn from the data they are given; they do not infer, extrapolate, consider, self-reflect upon, or contemplate what that data actually "means". Current AI is fundamentally incapable of those human modes of thought, or of self-improvement to acquire them.
If they are given bad data that contains unethical or biased content, they will mindlessly repeat it forever, with particularly stagnatory implications for any society which gave such a system law-making powers.
Current AI is an unthinking automaton, and we must carefully avoid anthropomorphising its capabilities.
Despite spending my long career trying to make it otherwise, there is still no ghost in the machine.
Dear Nicholas,
Thanks for expanding on the points you made earlier.
I agree that it is erroneous to anthropomorphize a machine which lacks consciousness. Likewise it is erroneous to anthropomorphize the appellate judicial function. That is so even where production of rational legal reasons is a mandatory requirement (imposing that requirement does, however, rule out use of crocodiles and deep ponds). After disposing of both tendencies to anthropomorphize, realistic functional requirements for an appellate judge (human or otherwise) may be defined.
To illustrate the tendency to anthropomorphize the judicial function, I refer to a related matter currently occupying some legal academics and a few practitioners: the question of who should bear legal responsibility when an autonomous vehicle, without necessarily being at fault in a legal sense, physically causes loss or injury in a road traffic accident. Problems in this category provoke considerable discussion of ethical issues. Consider an inevitable-accident scenario in which an autonomous vehicle, having fully complied with all applicable traffic laws, is suddenly confronted with a dilemma due to exceptional circumstances. Even by applying all technically possible means of accident avoidance, the system controlling the vehicle cannot avoid killing at least one of two people: a woman who is visibly pregnant, or a famous violinist known to play an exquisite Stradivarius (the latter is not visibly pregnant, but she is carrying a violin case). If the system fails to make any decision, by operation of Newton's first law the vehicle will kill both the pregnant woman and the famous violinist.
The ethical problem is supposed to reside in the apparent requirement to equip the vehicle's control system with the capacity to decide which human being(s) to preserve in such foreseeable (if rather improbable) circumstances. Even if we insist that the technical system cannot be consciously aware of anything in a sentient sense, the system nevertheless is highly competent at tasks including analysis of visual inputs and exception handling. In addition, the system has access to detailed data concerning human beings, including famous persons, thanks to its connection to the Internet.
Non-sentient technical systems controlling autonomous vehicles could be programmed to give a high priority to preserving human life (including human embryos) and a lower priority to preserving property (including exquisite violins). In implementing such a rule set, all human lives could be reckoned to be of equal value, so that a pregnant woman counts as two persons; or more detail could be introduced, e.g. the famous violinist in question might rate a value coefficient of 1.8 and her Stradivarius an absolute value of 0.3, thereby making the combination of them slightly more worthy of preservation than an empty-handed pregnant woman whom the technical system cannot identify in the short time available for deciding whose life to preserve.
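Purely to illustrate how mechanical such a rule set would be, here is a toy scoring function using the arbitrary coefficients just mentioned; nothing here is a proposal for real deployment:

```python
# Toy priority scheme for an unavoidable-harm decision.
# All coefficients are the illustrative values discussed above.
LIFE = 1.0              # baseline value of one human life
PREGNANCY_BONUS = 1.0   # a pregnant woman counts as two persons
FAMOUS_VIOLINIST = 1.8  # example of a differentiated coefficient
STRADIVARIUS = 0.3      # property valued far below any life

def preservation_score(values):
    """Sum the value of everything preserved if this trajectory is chosen."""
    return sum(values)

options = {
    # trajectory -> values preserved by taking it
    "swerve_left":  [LIFE + PREGNANCY_BONUS],          # the pregnant woman survives
    "swerve_right": [FAMOUS_VIOLINIST, STRADIVARIUS],  # the violinist and violin survive
    "no_decision":  [],                                 # both are killed
}

best = max(options, key=lambda o: preservation_score(options[o]))
print(best, preservation_score(options[best]))   # swerve_right 2.1
```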
The human tendency to insist that only humans can be "ethical" seems actually to be an expression of the human abhorrence towards engaging in arithmetical calculations where human lives are at stake. I do not claim that this abhorrence is absurd; it may be justified for spiritual reasons. Post-Enlightenment societies tend to have a taboo against recognizing that humans are spiritual as well as physical beings. The supposed human monopoly on handling ethical rules may be a sublimation of a still-living underlying recognition that humans are spiritual beings (although it is considered intellectually unacceptable to say so). Any principle of ethics expressible in a wise epigram also is expressible in algorithms. There seems to be no particular reason why a machine could not learn to apply ethical rules to formal records of fact and law.
Compared to a road traffic emergency, an appeal case is a highly structured affair. In general, the appeal court must decide whether the lower court made such errors of law as the appellant claims. While an appeal court may have occasion to examine transcripts of oral evidence, it does not examine witnesses.
As you correctly observe, the law develops, in part, because human judges of appeal sometimes decline to apply apparently applicable law or to follow an established line of judicial authority. They instead reinterpret the law, occasionally pronouncing a new legal principle such as the "neighbor principle" established by Lord Atkin in Donoghue v Stevenson [1932] AC 562, proceeding from a New Testament parable. The judicial technique of "reading down" legislation, i.e. applying a very strict construction so as to apply the legislation only in very narrow circumstances, quite frequently is used in order to promote legal certainty. It very occasionally occurs that a superior court rules that formally valid and apparently applicable law altogether lacks the character of law (e.g. because it purports to authorize arbitrary administrative action), and hence is not a law at all!
Such transformative techniques used by judges are a deus ex machina so far as legal evolution is concerned. These techniques likewise are expressible as rules; in fact, this circumstance is the very factor which legally and politically legitimates judicial use of the transformative techniques. In the 1980s my old law professor, A. R. Blackshield, demonstrated that each Australian High Court judge's social and biographical background, reduced to a set of FORTRAN statements, predicted to an accuracy of about 70% how that judge would decide a particular constitutional case.
Against this background, I do not see any reason why a non-sentient system could not be built to perform both standard legal analysis and transformative legal analysis. The probability that transformative analysis would be applied in a particular case might be determined by assessment of the extent to which the applicable laws comply with a set of ethical principles, expressed as algorithms. Such a system's performance as an appellate judge could be regarded as fully satisfactory as long as it passed the Turing test with both non-lawyers and lawyers.
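A crude sketch of how that last suggestion might be operationalized: each ethical principle becomes an executable predicate over the formal case record, and a low compliance score triggers transformative rather than standard analysis. The principles, weights and threshold below are invented purely for illustration:

```python
# Each "ethical principle" is a predicate over the formal case record,
# paired with a weight. All names and numbers are illustrative only.
PRINCIPLES = [
    ("no_arbitrary_power", lambda case: not case["authorises_arbitrary_action"], 0.5),
    ("equal_treatment",    lambda case: not case["differentiates_by_race"],      0.3),
    ("certainty_of_law",   lambda case: case["rule_is_ascertainable"],           0.2),
]

def compliance_score(case) -> float:
    return sum(weight for _, test, weight in PRINCIPLES if test(case))

def analysis_mode(case, threshold=0.6) -> str:
    """Below the threshold, standard application of the law yields to
    transformative techniques (reading down, restrictive construction, etc.)."""
    return "standard" if compliance_score(case) >= threshold else "transformative"

case = {
    "authorises_arbitrary_action": True,
    "differentiates_by_race": False,
    "rule_is_ascertainable": True,
}
print(analysis_mode(case))   # "transformative": compliance score 0.5 < 0.6
```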
I would say that AI research is not there yet, though not for the reasons brought up by Nicholas or William but for the reasons I gave earlier. To be more precise:
"create rulings that create new law, and that new law needs to reflect the best ethics of the time, and to lead people forwards with a better ethical and legal framework for the equal benefit of everyone in society."
This is a totally idealistic benchmark which neither AI nor human can achieve. It is too ambiguous as a benchmark.
"Current AI cannot distinguish between correlation and causation in training data." This depends on the point of view. For example I can use K10 with Bayesian networks and also use structural equation modeling in an expert system to arrive at causal models which may even outperform non structured evaluation of cause and effect. Again the benchmark has not been given on proper evaluation on which to justify a negative outcome on the part of AI.
As for the Tay example... that was a chatterbot with minimal capability, deployed as a publicity stunt, and not a system on which to base an evaluation of AI capability.
'The purpose of my previously appended reference to the chatbot was to give one real-world example of how AI systems only learn from the data they are given; they do not infer, extrapolate, consider, self-reflect upon, or contemplate what that data actually "means".'
Again, the Tay example is not a measure of the state of the art in the fields of inference, extrapolation, or contemplative AI (in the sense of being creative).
To also address the comment by William:
'Nothing that we can currently produce in terms of "AI" technology is remotely "intelligent" in the way that humans are'
Give me a proper benchmark that is not ambiguous and that both a human and a computer can pass (failed attempts at the Turing test have proven ineffective), and I will consider this quote acceptable.
The reason I am highlighting this is that such statements are not conducive to moving the subject forward towards operationalizing an AI system in the field of law. It is better to get into the specifics of techniques and how they might be applied.
Finally, I think this thread has merit and deserves serious consideration.
Michael,
I do not know how entangled you might want to get in the topic of deontic systems, but I have been doing research on operationalizing a multi-modal system based on temporal alethic–deontic logic [1]. The objective is also to enforce enactment points as well as case-law context. I am also trying to incorporate counterfactuals into the system [2].
[1] Daniel Rönnedal, "Temporal alethic–deontic logic and semantic tableaux".
[2] Daniel Rönnedal, "Counterfactuals in Temporal Alethic-Deontic Logic".
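To make the deontic core of such a system concrete - and at the risk of trivializing Rönnedal's calculus - here is a drastically simplified, hand-rolled sketch of deontic and temporal operators over time-indexed possible worlds. It is a toy semantics of my own for illustration, not an implementation of the cited papers:

```python
# Time-indexed possible worlds: world -> time -> set of true propositions.
# The model, the accessibility relation and the propositions are toy assumptions.
WORLDS = {
    "w_actual": {0: {"charged"}, 1: {"charged", "reasons_given"}},
    "w_ideal1": {0: {"charged"}, 1: {"charged", "reasons_given"}},
    "w_ideal2": {0: set(),       1: {"reasons_given"}},
}
# Deontically accessible ("ideal") worlds, as seen from each world.
IDEAL = {
    "w_actual": ["w_ideal1", "w_ideal2"],
    "w_ideal1": ["w_ideal1"],
    "w_ideal2": ["w_ideal2"],
}
TIMES = [0, 1]

def holds(p, w, t):            # atomic truth at world w and time t
    return p in WORLDS[w][t]

def obligatory(p, w, t):       # O(p): p holds in every deontically ideal world
    return all(holds(p, v, t) for v in IDEAL[w])

def permitted(p, w, t):        # P(p): p holds in at least one ideal world
    return any(holds(p, v, t) for v in IDEAL[w])

def always_future(pred, w, t): # G: pred holds at every time >= t
    return all(pred(w, u) for u in TIMES if u >= t)

# "From time 1 onward, it is obligatory that reasons be given."
print(always_future(lambda w, t: obligatory("reasons_given", w, t), "w_actual", 1))  # True
# "At time 0, being charged is permitted but not obligatory."
print(obligatory("charged", "w_actual", 0), permitted("charged", "w_actual", 0))     # False True
```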
Dear All,
It is remarkable that so few lawyers and judges have studied formal logic. Most of what little I know about the subject was gleaned from Douglas Hofstadter's marvelous book Gödel, Escher, Bach: An Eternal Golden Braid.
Reading now about alethic, deontic, temporal and doxastic logics, it appears that most of the necessary tools for analyzing legislation are by now available - and urgently need to be transferred back to legal analysis. Too little formal analysis of legal relations and legal reasoning has been done since Hohfeld's pioneering work (inspired by a passage in Thayer's Preliminary Treatise on Evidence, 1898) led logicians to develop formal deontic logic.
Legislation contains a considerable amount of doxastic logic. Consider section 7(1) of the Australian Therapeutic Goods Act 1989 (Cth), which could have been extracted from a draft of Lewis Carroll's Alice's Adventures in Wonderland and Through the Looking-Glass:
7. Declaration that goods are/are not therapeutic goods
(1) Where the Secretary is satisfied that particular goods are or are not therapeutic goods, ... the Secretary may, by order published in the Gazette ... , declare that the goods ... are or are not, for the purposes of this Act, therapeutic goods.
More than two years ago I wrote to the Secretary, stating a complete and coherent case proving that certain goods are therapeutic goods as defined in the Act. I was aggrieved because the Secretary's agency publicly represented the contrary position on its website (it continues to do so). My letter concluded with an application, pursuant to section 7(2), for an order under section 7(1) declaring the relevant goods to be therapeutic goods for the purposes of the Act. The Acting Secretary responded that she could not grant my application because (legal proof notwithstanding) she was not correspondingly "satisfied" - no doubt because of the government's contrary position! In a later development, a different Commonwealth agency made an adverse (from my point of view) administrative decision based on its finding that the Secretary's position is "reasonable". Upon applying to that agency for a formal statement of reasons setting out the facts and law in support of the finding, I received a statement without reasons. In April 2017 I applied to the Federal Circuit Court of Australia for an order requiring the agency to furnish me with a further statement of reasons setting out the facts and law upon which the finding was based. The court promptly heard my application but even now has not decided it. It is as if a malicious line of code were inserted into the judge's brain: "Exit Sub".
I hope this small vignette helpfully illustrates why some lawyers see a need for implementing robust - and properly protected - automated legal analysis engines in judicial functions.