AI judges can conduct efficient trials with vast databases and accurate case law analysis, but they lack the empathy and sense of justice that only humans have. While the automation of laws and trials may offer convenience, it is still debatable whether it can replace the flexibility and moral judgment of human judges.
Driving a rapidly changing society: automation
It’s no exaggeration to say that humanity’s passion for automation has been a driving force behind modern science and technology. The urge to automate is an old one, extending even to something as simple as opening a door, now handled by a machine we call the automatic door. Today, automation is driving the disruptive change commonly referred to as the fourth industrial revolution, or the hyper-connected society, in which humans become at once the masters of automated systems and dependent on them.
Where the steam engine was the key to the first industrial revolution, electricity to the second, and the computer to the third, the key to the fourth is the combination of sensors and artificial intelligence. Sensors serve as the peripheral nervous system and AI as the central nervous system, together forming a vast organism that spans people and things. In recent years, artificial intelligence has proven its promise with examples such as AlphaGo and Watson, and its range of application keeps widening as sensing technologies such as speech recognition improve dramatically.
The feasibility of AI judges
Despite its power, AI is fairly simple to develop in principle. Implementing an AI system amounts to training a machine on a database: all you need is a well-posed problem and data about it, and the quantity and quality of that data largely determine the system’s performance. In fields like healthcare, where databases are rich and the problem to be solved is clear, an AI like Watson can perform well enough to replace human doctors.
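As a rough illustration of this “problem plus data” recipe, the sketch below trains a simple classifier on a public dataset with scikit-learn (a toolkit chosen purely for illustration; the essay names none) and prints how held-out accuracy changes with the amount of labelled data the model sees.

```python
# Minimal sketch of "a problem plus data": fit a classifier and watch
# performance track the size of the training set. Illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# The "problem": predict a binary label from a feature vector.
X, y = load_breast_cancer(return_X_y=True)

# The "database": labelled examples, with a held-out set for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

for n in (30, 100, len(X_train)):  # train on progressively more data
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:3d} examples -> held-out accuracy {acc:.2f}")
```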
This is where the idea of the AI judge comes in. Numerous precedents and vast bodies of statute make for excellent databases, and a trial is a setting where the problem is clearly defined. In fact, in October 2016, a joint study by University College London and the University of Sheffield in the UK and the University of Pennsylvania in the US found that a machine-learning system could predict the outcomes of cases before the European Court of Human Rights with 79% accuracy. Dr. Nikolaos Aletras, who led the study, said that rather than replacing human judges, AI would assist them by identifying patterns in the rulings of complex cases. Still, it is clear that the ultimate goal of such research is to raise accuracy far enough to replace human judges.
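To make the idea concrete, the sketch below shows the general shape of such an outcome-prediction system: the text of past judgments becomes the input features and the recorded verdict becomes the label. The file name, column names, and choice of model are assumptions made for the example, not the pipeline the 2016 study actually used.

```python
# Hedged sketch: predict a case outcome from the text of the judgment.
# "case_corpus.csv" with columns "text" and "outcome" is hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

cases = pd.read_csv("case_corpus.csv")  # one row per past case
X_train, X_test, y_train, y_test = train_test_split(
    cases["text"], cases["outcome"], test_size=0.2, random_state=0
)

# Bag-of-words (TF-IDF) features feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LinearSVC(),
)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The accuracy printed here measures only how well the outcomes of past cases can be predicted from their text on held-out examples.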
Roman Yampolskiy, a professor at the University of Louisville in the US, points out that the question of “whether AI can replace judges” is not the same as “whether it is desirable for AI to replace judges”. Desirable or not, however, it seems inevitable that AI judges will eventually replace their fickle human counterparts.
The purpose of automating law, courts, and judges
According to the ancient Roman jurist Domitius Ulpianus, law is derived from justice and is the art of the good and the equitable. The weak are protected by the law from the strong, and the strong are restrained by the law so that they cannot oppress the weak. Those who fail to fulfill their obligations are punished by the law, while those who fulfill them have their rights guaranteed by it. The law, in turn, is applied through trials, and a trial determines the sentence of the accused according to the law.
From this perspective, a judge should be rational enough to set aside all personal values and look only at the law, but judges are human and cannot completely eliminate their subjectivity. Sometimes corrupt judges even hand down wrong decisions out of self-interest. AI, however, is not subjective, so it could conduct trials “perfectly”. This is the motivation and purpose behind replacing judges with AI.
Another reason to replace judges with AI is to address delays in trials. A trial before a human judge can currently take months to reach a decision, whereas an AI judge could decide in a fraction of the time. An AI judge could also draw on a combination of scientific and statistical tools, such as polygraph results, to assess the credibility of statements.
Are AI judges just?
In the previous sections, we discussed how AI judges could be implemented and why they are needed. Now we will discuss the desirability of automating trials with AI from the perspectives of a judge’s qualities, of rights, and of humanism.
Before discussing whether an AI judge has the qualities of a judge, let’s clarify what those qualities are. As noted earlier, laws are social norms set by the state, and a trial is the process of judging how far a person has violated those norms and determining the punishment accordingly. So if a person can memorize every law and reasonably determine punishments from case law, does that make them a good judge? No; meeting those conditions alone does not make a good judge.
A good judge is, above all, a just judge. But what is justice? The question is complex enough to fill a book, so let’s confine ourselves to a brief look at what makes a judge just.
A just judge relies on the law but respects lay people’s sense of it. Such a judge empathizes with the case, feels compassion, and decides on the basis of an understanding of social consensus. A just judge listens to the jury’s verdict and understands the law, yet has the discernment to see beyond it. The law is not an end in itself but a means to an end: social justice. If a law is found not to reflect justice and equity, it should be amended promptly.
AI judges, however, lack this flexibility. To use an analogy, an AI judge is a student who excels at memorization: it can only learn, in a results-oriented way, from the decisions of its predecessors. An AI judge that relies solely on past precedent will therefore be unable to respond to novel kinds of cases or to changes in the legal system.
Submission to AI judges
If a Brazilian were to judge a case in a Japanese court, the Japanese public would likely resent the decision, because the judge would not understand the local culture and its idiosyncrasies. People trust judges because judges seem to empathize with them.
As Yuval Noah Harari notes in his book Homo Deus, humans use empathy to assess the guilt of others. An AI judge that lacks empathy would forfeit that right. Even if powerful AIs one day become mentally equal to humans, humans will not fully trust them: we do not understand their thought processes, so we have no reason to trust their judgments.
Judges should be human
Sovereignty is a central issue in humanism. It is a universal truth that humans are their own masters and that this right is inalienable. Trials are an essential process for resolving conflicts in human society, and it is not desirable for non-human AI to take part in that process.
The automated society that humans have dreamed of is one in which people think for themselves, not one that thinks for them.