Artificial Intelligence's Place in the Criminal Justice System
It is a generally accepted principle that our criminal justice system is imperfect. The imperfections are evident in wrongful convictions, inadequate funding for indigent representation, jury bias, and political pressure placed on the elected judiciary. To be fair, imperfection might also be seen in juries failing to convict the guilty. Human systems inevitably suffer the flaw of imperfect humans. The counter argument is always the same: despite its deficiencies, our criminal justice system is “as good as it can be,” and the presence of human error is outweighed by the rights afforded to those accused of crimes.
What if technology could solve these imperfections? We live in a world where scientists have already conducted studies using MRI brain scans to detect bias and deception. If juries are supposed to be fair and impartial, we can now visualize the presence of unfairness and bias through this imaging. What if we could eliminate such imperfections in jury selection or even in jury decision making? Is it science fiction to believe that we might create a world that allows artificial intelligence to make decisions on the weight of evidence, the credibility of witnesses, and the guilt of the accused, entirely free from bias? Free of discrimination? No.
Some have argued that this technology could also serve as a convenience to those summoned to jury duty who suffer economic hardship, miss work and business opportunities, and lose time with their families as a result of their jury service. Don’t we want a jury that can make complex legal and factual decisions free from bias?
We are also seeing the emergence of technology that utilizes artificial intelligence to assist courts with risk assessment in day-to-day decision making. Courts across the country are beginning to use A.I.-driven algorithms to assess pretrial release factors, sentencing considerations, and other judicial decisions. This technology is lauded as a tool that removes the biggest element of imperfection from our system: human imperfection. The belief is that the information courts need to make decisions can be collected and processed by these algorithms, saving the criminal justice system money on personnel and on the rising costs of incarceration. These risk assessments remove political pressure from elected judges deciding whether to allow someone out of jail while awaiting trial. They create objective safeguards in determining the length of sentences, unfettered by human emotion and compassion. Is this what we want?
On the other hand, A.I. absolutely creates risks in its use. There are state and federal constitutional questions regarding due process, the right to a fair trial, and the right to a jury of one’s peers, just to name a few. There certainly will be legislative questions. There are questions of transparency given the proprietary nature of the technology. If the criminal justice system is to give this technology such important decision-making authority, there must be transparency. But if we do allow transparency, what about the cybersecurity issues that come with it? What about hacking? We have seen the risk of artificial intelligence being hacked in self-driving vehicles. Concerns exist that a decision-making technology that calculates risk could be hacked to the benefit or detriment of those being assessed or sentenced. And what about the risk of good old-fashioned technological error? Do software and technology always work the way they should? Imagine a software update that cripples your cellular phone, and now imagine that same sort of “bug” in deciding guilt versus reasonable doubt. Can an algorithm be created to judge reasonable doubt if reasonable doubt is based on logic and common sense?
Is the imperfection of humanity something we want to remove from our criminal justice system? Yes, there are flaws, but there is also value in the application of “flawed concepts” such as mercy and compassion. Can A.I. understand the complicated nuances of substance addiction? Mental health issues? Mitigation that can show not just who someone is but, more importantly, why they are the way they are? Can A.I. predict the likelihood of showing up to court if the accused is allowed pretrial release? Can A.I. predict the likelihood of recidivism? Relapse? Most importantly, can A.I. predict the human tendency of occasionally “defying the odds” and turning one’s life “around” despite indications to the contrary?
One of the most valuable components of decision making in the criminal justice system is the fact that decision makers, whether judges or juries, bring their own life experiences to the table when making decisions. Can justice exist without a human decision maker who brings human experience to the table? Alternatively, are we being truthful when we say, “We just want decision makers that are fair and impartial and that can separate fact from emotion”? Wouldn’t artificial intelligence deliver on this lofty and nearly impossible ambition?
Regardless of how you fall on the issue, A.I. is currently being utilized in some courts in the American criminal justice system. There will be litigation about this utilization, to be sure. At the end of the day, the question remains: do the benefits of using this technology outweigh the risks to society and to those accused of crimes? Does it create a better justice system and a safer world? Everyone enjoys the idea of a self-driving vehicle until the vehicle is hacked and the decision maker is shown to be imperfect. Can artificial intelligence be imperfect? Is its potential imperfection better than human imperfection? We shall see.
Franz N. Borghardt