AI in CJR: What We're Missing and Misusing

It is the 21st century. In this millennium alone we have seen tremendous technological innovation and progress: drones, social media, electric vehicles, smartphones, blockchain, and so many other inventions that would make today unrecognizable just 23 years ago. These innovations have improved our lives in many ways, but what are the dangers when we forge ahead without laying a critical foundation?

While some states are experimenting with artificial intelligence lawyers and digital DNA, other states cannot tell you how many people are currently behind bars, how long they have been there, or on what charges, or even how many are detained pretrial without having been convicted of anything.

Innovations:

On the surface, innovations are created for meaningful change, to help some subsection of the population that isn't served by the status quo. The criminal legal sector has not been spared from this technological boom. In fact, it may have seen some of the most controversial inventions of all, whether social media, Face ID, digital DNA, or even AI robot lawyers.

As we break down these societal achievements, a question arises: are these innovations truly creating meaningful, yet morally sound, change? And if they are for the good of society, how do we ensure they remain so? The question seems particularly urgent in a state like Oklahoma, which is struggling to keep up with the 21st century. Critical state agencies have extraordinary needs. OSBI and Oklahoma's courts have not updated their data systems in decades. Prosecutors use data systems that cannot communicate with those of county sheriffs. The Department of Corrections has its own system that does not communicate with anyone else's. This ad hoc, each-agency-for-itself approach to data storage creates an absolute mess for law enforcement and policymakers, who are forced to wade through data points that are neither standardized nor shared. It keeps everyone blind to the realities of the criminal legal system: policy fixes are difficult to craft when nobody can easily identify a problem, much less solve it.

AI Robot Lawyers:

Possibly the most futuristic of these advances is the AI robot lawyer. The application runs on a smartphone, listens to courtroom arguments, formulates responses for the defendant, and delivers those responses in real time through headphones or a Bluetooth earpiece.

A robot lawyer was set to take its first case on February 22, 2023. DoNotPay, the company that designed the AI lawyer, released a statement saying, "after receiving threats from State Bar prosecutors, it seems likely they will put me in jail for 6 months…DoNotPay is postponing our court case and sticking to consumer rights."

Though we won't see the AI robot lawyer in action, DoNotPay continues to offer legal services through AI-generated form letters and chatbots that help people secure refunds for things like in-flight Wi-Fi that didn't work, lower their bills, and dispute parking tickets. According to its CEO, these AI templates have been used in more than 2 million customer service disputes and court cases.

While the public may benefit from democratizing legal representation and making it free for those who can't afford it, there are inevitably pitfalls. 

There are many unknowns surrounding ChatGPT (Chat Generative Pre-trained Transformer) and GPT-3, the language models used to generate the AI robot lawyer's responses.

To put the issue plainly, GPT-3 and its successors are incredibly dumb. The public has been led to believe this innovation will replace humans across entire sectors, seemingly as soon as tomorrow; in reality, GPT-3 can string words together in convincing ways, but it has no idea what the words mean. Predicting that the word "down" is likely to follow the word "fell" does not require any understanding of either word, only a statistical analysis of how often they appear together.
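To make that concrete, here is a minimal sketch of the kind of word-co-occurrence statistics described above. It assumes a toy corpus and whitespace tokenization; it resembles nothing of GPT-3's actual scale or architecture, but the principle of predicting from frequency alone is the same:

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on hundreds of billions of words.
corpus = "the stock fell down . he fell down the stairs . prices fell sharply".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# "Predict" the word after "fell" purely from co-occurrence counts.
word, count = following["fell"].most_common(1)[0]
print(word)  # -> "down", chosen with no notion of what falling means
```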

As we delve into the issues with GPT-3, we must remember that assuming AI is more capable, or more human, than it really is, is nothing new. In the 1960s we met the original GPT-3: a chatbot called ELIZA, developed by MIT computer scientist Joseph Weizenbaum. ELIZA convinced many psychiatric patients that they were interacting with a real psychiatrist simply by the way it responded. The chatbot was designed to reflect the patient's statements back as questions, a popular psychiatric technique at the time, which led patients to believe they were talking with a human being. This phenomenon became known as the Eliza effect.
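ELIZA's trick was largely mechanical pattern substitution. The sketch below illustrates the idea; the pattern and pronoun swaps are invented for illustration and are not Weizenbaum's original script:

```python
import re

# Swap first-person words for second-person so a statement can be mirrored back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(statement: str) -> str:
    statement = statement.rstrip(".!")
    # Mirror "I feel X" / "I think X" statements back as questions.
    match = re.match(r"i (feel|think) (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you {match.group(1).lower()} {reflect(match.group(2))}?"
    return f"Why do you say: {reflect(statement)}?"

print(eliza_reply("I feel anxious about my trial."))
# -> "Why do you feel anxious about your trial?"
```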

Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Researchers discovered that users unconsciously assumed ELIZA's questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion.

It is best that we don't repeat history and instead remember that "AI" is merely a fanciful name we've given to a chatbot, and that a chatbot should in no way be allowed to make potentially life-altering decisions for someone in a court of law.

Digital DNA:

DNA technology is perhaps the most misunderstood innovation in the criminal legal sector. 

In 1985, DNA entered the courtroom for the first time as evidence, but it wasn't until 1988 that DNA evidence actually sent someone to jail. While DNA evidence is incredibly powerful, it is also full of limitations and misconceptions. One misconception is that a DNA match is an absolute guarantee of the suspect's guilt. This is far from the truth; forensic experts prefer to talk about probability. For example, they might state, "The chance is 1 in 10,000 that an unrelated person would by chance have the same DNA profile as that obtained from the evidence." However, that does not equate to a 1-in-10,000 chance of innocence, a common misconception known as the "Prosecutor's Fallacy."
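A back-of-the-envelope calculation shows why the two numbers diverge. Purely for illustration, suppose 500,000 people could in principle have left the sample (the population figure is invented; the random-match probability is the one quoted above):

```python
# Illustrative reasoning about a 1-in-10,000 random match probability.
random_match_prob = 1 / 10_000
population = 500_000  # hypothetical pool of possible sources

# Expected number of *innocent* people whose profiles match by chance.
innocent_matches = random_match_prob * population
print(innocent_matches)  # -> 50.0

# The true source also matches, so absent other evidence a matching suspect
# is one of roughly 51 matching people.
p_source_given_match = 1 / (innocent_matches + 1)
print(round(p_source_given_match, 3))  # -> 0.02, nowhere near 9,999 in 10,000
```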

Now that we see the intricacies of DNA when used as evidence, how does DNA evidence change when it is integrated with digital technologies?

Digital DNA uses a statistical method of DNA profiling, often called probabilistic genotyping, that can replace manual interpretation, allowing laboratories to test minuscule DNA samples that conventional techniques cannot handle. The perceived benefit is clear: in a case with only trace amounts of DNA, analysts formerly could not use DNA evidence at all. However, digital DNA has negative aspects as well.
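At its statistical core, such software typically reports a likelihood ratio: how much more probable the evidence is if the suspect contributed to the sample than if a random person did. Here is a toy single-source, single-locus version; the allele frequencies are made up, and real probabilistic genotyping models handle degraded and mixed samples with far more machinery:

```python
# Toy likelihood ratio for one locus of a single-source sample.
# Assumed (made-up) population frequencies of the two observed alleles.
p_allele_a = 0.10
p_allele_b = 0.05

# Under Hardy-Weinberg assumptions, a random person carries this
# heterozygous genotype with probability 2 * p_a * p_b.
random_genotype_prob = 2 * p_allele_a * p_allele_b  # 0.01

# LR = P(evidence | suspect is source) / P(evidence | random source).
# If the suspect's genotype matches the sample, the numerator is ~1.
likelihood_ratio = 1 / random_genotype_prob
print(likelihood_ratio)  # -> 100.0; real cases multiply LRs across many loci
```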

One of the most widely used software products is TrueAllele, developed by Cybergenetics. Despite TrueAllele being used for more than 20 years and in nearly 1,000 cases, no one outside the company knows whether it works, because the code is proprietary: only the company that wrote it has seen it. Proprietary code makes cross-examination of expert witnesses nearly impossible and only worsens the black-box problem in our criminal legal system. On average, software contains about six flaws for every 1,000 lines of code, and these digital DNA systems run on roughly 170,000 lines.
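Taking those figures at face value, the expected defect count is simple arithmetic:

```python
# Back-of-the-envelope estimate using the defect rate cited above.
lines_of_code = 170_000
flaws_per_thousand_lines = 6

expected_flaws = lines_of_code * flaws_per_thousand_lines / 1_000
print(expected_flaws)  # -> 1020.0 expected flaws in a codebase no one can audit
```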

There are currently no laws or statutes to prevent someone from being incarcerated on the strength of digital DNA; in fact, there are no laws regulating these algorithms at all.

Face ID:

Face ID, or facial recognition technology (FRT), uses biometrics to measure and calculate human facial characteristics, which are then used for authentication or identification, for example to unlock your phone or access your social media accounts. Where it intersects with the criminal legal sector, FRT is most often used by police to identify individuals under surveillance. But what about the issues that stem not from how the technology is implemented, but from its mere existence?

When you look in a mirror, your brain captures and stores the data of what it sees so that you can remember what you look like. The biometrics of Face ID function in essentially the same way, which leads us to the first issue with FRT: unlike many other forms of data, your face is on permanent public display and cannot be encrypted or hidden.
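Under the hood, most FRT reduces a face image to a numeric template, an "embedding" produced by a neural network, and compares templates by distance: two captures of the same face should land close together. A minimal sketch of the matching step follows; the vectors and threshold are invented for illustration, and real embeddings have hundreds of dimensions:

```python
import math

# Hypothetical face embeddings: the stored enrollment template and a new capture.
enrolled = [0.12, 0.87, 0.33, 0.54]
probe = [0.10, 0.85, 0.35, 0.52]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

# Accept the match if similarity clears a tuned threshold (invented here).
THRESHOLD = 0.98
score = cosine_similarity(enrolled, probe)
print(score >= THRESHOLD, round(score, 4))  # -> True 0.9995
```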

Given the volume of facial data already housed in various databases (e.g., driver's licenses, mugshots, and social media), the potential for harm is enormous: unauthorized parties can easily "plug and play" numerous data points to reconstruct a person's life. Biometric authentication is more secure than a traditional password, making it difficult to hack, but conversely, data breaches involving facial recognition data raise the stakes for identity theft, stalking, and harassment because, unlike passwords and credit card numbers, faces cannot easily be changed.

Globally, we are set to see an increase in cybercrime, yet in legislatures across the country, politicians are still fighting for the misguided, antiquated policies of the war on drugs, the war on terror, and other failed tough-on-crime agendas. Legislatures should instead begin looking toward protecting our digital footprints. When it comes to Face ID technology, there are currently no federal laws governing the use and protection of biometrics. A few states, namely Illinois, Texas, and Washington, have passed legislation regulating the collection and use of biometric data, the best known being Illinois's Biometric Information Privacy Act, or BIPA. In other states, like Oklahoma, there is no way of knowing how much of your data is being collected or what is done with it (Wong, 2020).

So What’s the Problem with All of That?

It is glaringly apparent that we don't have the structures in place for these sorts of additions, and each new technology comes with its own individual issues. The common concerns stemming from this lack of robust structure boil down, most simply, to privacy. These privacy issues, more often than not, mean that consumers and citizens are being taken advantage of, unbeknownst to them. It is time for legislatures across the nation to leave the mistakes of the past behind and consider not only how innovations could be used for true social good, but also how to ensure these technologies are not actively turned against their citizens.