AI in the Criminal Justice System
In the courtroom, you are presented with two options: have an algorithmic, data-driven tool evaluate your readiness for release from jail, or have a stranger with years of experience and preparation decide your fate. Both choices could lead to the wrong decision, but the question still stands: whom do you trust more? Unfortunately, this is not a choice available to individuals facing prosecution in states like New York, Florida, and California. Risk assessment tools, the newest addition to courts, analyze a defendant’s data to inform how the court should proceed with their case. Some believe these tools will make the system more just, while others believe no non-human should take up space in the courtroom; as a result, risk assessment tools have received mixed opinions.
How Do Risk Assessment Tools Work?
Risk assessment tools predict the likelihood that a defendant will reoffend, an outcome known as recidivism. One of the most prominent companies in this field is Northpointe, which owns the risk assessment tool COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). COMPAS formulates a recidivism risk score based on a defendant’s answers to a questionnaire, which are compared against the profiles of known high-risk offenders. The individual is given a score from one to ten: ten means the individual is high-risk, and one means the individual is low-risk. Northpointe does not release the algorithmic calculations behind the scores, so the public only knows that the results are based on questions about the defendant’s education level, employment history, relationships, upbringing, and lifestyle.
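Because Northpointe keeps the actual formula secret, we can only sketch what a questionnaire-based risk score might look like. The sketch below is purely illustrative: the factor names, weights, and scaling are invented for this example and do not reflect COMPAS’s real calculations.

```python
# Hypothetical sketch of a questionnaire-based risk score.
# COMPAS's real factors and weights are proprietary; everything
# below is invented for illustration only.

FACTOR_WEIGHTS = {
    "prior_arrests": 3.0,
    "unstable_employment": 2.0,
    "low_education": 1.5,
    "unstable_housing": 1.5,
}

def risk_score(answers: dict) -> int:
    """Map questionnaire answers (each factor scored 0.0-1.0) to a 1-10 score."""
    raw = sum(FACTOR_WEIGHTS[f] * answers.get(f, 0.0) for f in FACTOR_WEIGHTS)
    max_raw = sum(FACTOR_WEIGHTS.values())
    # Scale the weighted sum onto a 1-10 scale, like COMPAS's reported range.
    return max(1, min(10, round(1 + 9 * raw / max_raw)))

# A defendant whose answers flag several factors scores near the top;
# one with few flagged factors scores near the bottom.
print(risk_score({"prior_arrests": 1.0, "unstable_employment": 1.0}))
print(risk_score({"low_education": 0.2}))
```

The point of the sketch is that the score is only as fair as the chosen factors and weights: if the inputs correlate with race or class, the output will too, even though race is never asked directly.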
The History of AI in the Criminal Justice System
Risk assessment tools are not the only way AI has appeared in the criminal justice system, and as more flaws are discovered, these tools are renovated or removed from the system entirely. In the 2010s, PredPol, a company focused on predictive policing, used a machine-learning algorithm to classify areas as “high-risk,” and police officers would spend a portion of their shift patrolling those areas. Although PredPol did not collect any data specifically about race or ethnicity, the areas classified as high-risk were mainly Latino and Black neighborhoods. In Los Angeles, CA, PredPol was used for some time before the city ended its use in 2020, citing cost and unreliability. Facial recognition is another technology the criminal justice system has adopted, used to identify potential suspects. However, facial recognition performs significantly better on the faces of middle-aged White men, which results in inaccurate identifications and, in some cases, wrongful arrests.
Where Does the Issue in Risk Assessment Tools Lie?
COMPAS has received a lot of public scrutiny for its biased results. ProPublica, a non-profit journalism organization, conducted a study of COMPAS’s bias and accuracy. The study found that COMPAS was unreliable at predicting whether an individual would reoffend, especially for violent crimes. It also found that Black defendants were falsely predicted to reoffend at twice the rate of White defendants, while White defendants were more often falsely predicted to be low-risk (ProPublica). Although COMPAS does not ask about the defendant’s race, some of the questionnaire’s items act as proxies for race because of racial disparities in the U.S., compounding biases already embedded in the criminal justice system.
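To make the disparity ProPublica measured concrete, it helps to see how a false positive rate is computed per group. The counts below are invented round numbers, not ProPublica’s actual data; they only illustrate what “falsely predicted to reoffend at twice the rate” means.

```python
# Illustrating a false positive rate (FPR) disparity between two groups.
# The counts are hypothetical round numbers, NOT ProPublica's findings;
# they only show the arithmetic behind the kind of gap the study reported.

def false_positive_rate(wrongly_flagged: int, total_non_reoffenders: int) -> float:
    """Share of people who did NOT reoffend but were labeled high-risk anyway."""
    return wrongly_flagged / total_non_reoffenders

# Hypothetical: 100 non-reoffenders in each group.
fpr_group_a = false_positive_rate(40, 100)  # 40 wrongly labeled high-risk
fpr_group_b = false_positive_rate(20, 100)  # 20 wrongly labeled high-risk

print(f"Group A FPR: {fpr_group_a:.0%}")
print(f"Group B FPR: {fpr_group_b:.0%}")
```

A gap like this can exist even when the tool’s overall accuracy looks similar across groups, which is why who bears the errors matters as much as how many errors there are.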
Risk assessment tools have proven inaccurate in real-life scenarios and detrimental to people’s lives. In one case ProPublica recounts, Brisha Borden, who is Black and was mistakenly reported to the police for stealing a bike and scooter, scored higher on the risk assessment than Vernon Prater, who is White and had shoplifted $86.35 worth of merchandise. Looking at their criminal histories, Borden had only juvenile misdemeanors on her record, while Prater had a record of armed robbery and attempted armed robbery. Despite this, the tool predicted that Borden was more likely to reoffend. In a follow-up two years later, Borden had not committed another crime, but Prater had. The case of Borden and Prater shows how using risk assessment tools without modifications to the system will allow injustices and unjust outcomes to continue occurring.
What Is the Future of Risk Assessment Tools?
Many people still have faith in the future of risk assessment tools. They believe the tools could make the criminal justice system more just if the companies behind them were more transparent about their processes, eliminated questions that could be tied to identity, and let the tools serve as an aid rather than the determining factor in sentencing and probation decisions. Although the future of AI remains uncertain, we must all be cautious when AI is introduced into society to avoid being misled by new technology.