The futures of many prison inmates depend on racially biased algorithms

Prisons use algorithms to predict recidivism, but the code is biased against black offenders.

Selena Larson

It sounds like something out of Minority Report: software that predicts how likely people are to commit crimes in the future and assigns them scores that judges and cops use to determine sentences and bond payments.

But these algorithms are real and widely used in the United States—and according to a new ProPublica report, this software is biased against African Americans.

The scores and data produced by risk-and-needs assessment tools like the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which ProPublica investigated, are based on a series of questions that offenders answer as they move through the criminal-justice system. (In some cases, the data also come from their arrest records.)

There are no questions about race, but the surveys include inquiries like “How many of your friends/acquaintances have ever been arrested?”, “Do you have a regular living situation?”, and “How often did you have conflicts with teachers at school?” 

A computer program analyzes the survey results and assigns a score to each offender that represents the likelihood of them committing a future crime. As ProPublica reported, offenders don’t get an explanation of how their scores are determined, even though judges and cops rely on them—or at least take them into account—when making important decisions about offenders’ fates.
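
COMPAS’s actual formula is secret, but survey-based risk tools generally share the same shape: answers are turned into numbers, weighted, summed, and then bucketed into the low, medium, and high bands that courts see. The sketch below illustrates only that general shape; the item names, weights, and cutoffs are invented for illustration and are not COMPAS’s.

```python
# Illustrative only: COMPAS's real questions, weights, and cutoffs are proprietary.
# This shows the general pattern of a survey-based risk score.

# Hypothetical survey items and weights (not COMPAS's).
WEIGHTS = {
    "friends_arrested": 1.5,   # "How many of your friends/acquaintances have ever been arrested?"
    "unstable_housing": 2.0,   # "Do you have a regular living situation?" (1 = no)
    "school_conflicts": 1.0,   # "How often did you have conflicts with teachers at school?"
    "prior_offenses": 2.5,
}

def risk_score(answers):
    """Weighted sum of numeric survey answers -> raw risk score."""
    return sum(weight * answers.get(item, 0.0) for item, weight in WEIGHTS.items())

def risk_level(score):
    """Bucket the raw score into the low/medium/high bands decision-makers see."""
    if score < 5:
        return "low"
    if score < 10:
        return "medium"
    return "high"

# Example: a defendant's (made-up) answers produce a score and a risk band.
answers = {"friends_arrested": 3, "unstable_housing": 1, "school_conflicts": 2}
print(risk_score(answers), risk_level(risk_score(answers)))
```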

ProPublica analyzed 10,000 criminal defendants and compared their risk scores to their actual recidivism rates over a two-year period. The publication found that black defendants were regularly assigned higher risk scores than were warranted, and that black defendants who did not commit a crime within two years were twice as likely as white defendants to be misclassified as higher risk.

The tool also underestimated white defendants’ recidivism rates, mistakenly labeling white recidivists as lower risk twice as often as black recidivists.

Other findings include:

  • The analysis also showed that even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 45 percent more likely to be assigned higher risk scores than white defendants.
  • Black defendants were also twice as likely as white defendants to be misclassified as being a higher risk of violent recidivism. And white violent recidivists were 63 percent more likely to have been misclassified as a low risk of violent recidivism, compared with black violent recidivists.
  • The violent recidivism analysis also showed that even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 77 percent more likely to be assigned higher risk scores than white defendants.
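
The comparison ProPublica describes boils down to an error-rate check by group: among defendants who did not reoffend, how often was each group labeled high risk (false positives), and among those who did reoffend, how often were they labeled low risk (false negatives)? Below is a minimal sketch of that calculation; the field names and the handful of records are invented, not ProPublica’s data.

```python
# Minimal sketch of an error-rate comparison by group.
# Records and field names are made up for illustration.

from collections import defaultdict

defendants = [
    # (race, labeled_high_risk, reoffended_within_two_years)
    ("black", True,  False),
    ("black", True,  True),
    ("white", False, True),
    ("white", False, False),
    # ... in practice, thousands of records from court and arrest data
]

counts = defaultdict(lambda: {"fp": 0, "no_reoffense": 0, "fn": 0, "reoffense": 0})
for race, high_risk, reoffended in defendants:
    g = counts[race]
    if reoffended:
        g["reoffense"] += 1
        g["fn"] += not high_risk   # reoffender labeled low risk
    else:
        g["no_reoffense"] += 1
        g["fp"] += high_risk       # non-reoffender labeled high risk

for race, g in counts.items():
    fpr = g["fp"] / g["no_reoffense"] if g["no_reoffense"] else 0.0
    fnr = g["fn"] / g["reoffense"] if g["reoffense"] else 0.0
    print(f"{race}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```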

This data-driven approach to criminal justice is intended to reduce the number of people in prison, save states and localities money, and reduce recidivism rates. But the use of algorithms and risk assessments to predict future crimes is not without controversy. In Wisconsin, one offender is challenging his sentence before the state’s supreme court on the grounds that the use of COMPAS violates his right to due process.

University of Michigan law professor Sonja B. Starr, who has studied the use of algorithmic-based risk assessments, said the surveys can adversely impact low-income offenders. 

“They are about the defendant’s family, the defendant’s demographics, about socio-economic factors the defendant presumably would change if he could: Employment, stability, poverty,” Starr told the Associated Press in 2015. “It’s basically an explicit embrace of the state saying we should sentence people differently based on poverty.”

Algorithms shape everything from our Facebook feeds to the ads we see online to prison sentences, so it’s natural that questions are arising about whether and how they are biased.

The more companies test out these kinds of systems—such as IBM’s software that tries to spot terrorists in refugee populations and predict jihadist attacks—the more concerned people become about how ethical it is to use computers to find patterns in human behavior.

Algorithms are imperfect. They are written by fallible human beings, and because they are the product of deliberate decisions made by those programmers, they can absorb and reinforce their creators’ stereotypes and biases.

As machine learning researcher Moritz Hardt writes:

An immediate observation is that a learning algorithm is designed to pick up statistical patterns in training data. If the training data reflect existing social biases against a minority, the algorithm is likely to incorporate these biases. This can lead to less advantageous decisions for members of these minority groups.
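
Hardt’s point can be made concrete with a toy example: if the training labels themselves carry a historical bias against one group, even a very simple learner will faithfully reproduce it. Everything below is synthetic and illustrative; it is not based on any real risk-assessment tool or dataset.

```python
# Toy illustration: a learner that memorizes the rate of "high risk" labels
# per group reproduces whatever bias is baked into those labels.

from collections import defaultdict

# Synthetic training labels that encode a historical bias: group B was
# labeled high risk (1) more often than group A.
training_data = [
    ("A", 0), ("A", 0), ("A", 1), ("A", 0),
    ("B", 1), ("B", 1), ("B", 0), ("B", 1),
]

def fit(data):
    """'Train' by estimating the rate of high-risk labels per group."""
    totals, highs = defaultdict(int), defaultdict(int)
    for group, label in data:
        totals[group] += 1
        highs[group] += label
    return {g: highs[g] / totals[g] for g in totals}

model = fit(training_data)

def predict(group):
    """Label anyone from a group high risk if the learned rate exceeds 0.5."""
    return "high risk" if model[group] > 0.5 else "low risk"

# Two otherwise identical people get different predictions purely because of
# the group statistics the model absorbed from its training labels.
print(predict("A"))  # low risk
print(predict("B"))  # high risk
```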

ProPublica’s report is a reminder that predictive algorithms are not “neutral” or “fair” simply because they’re software. And because the companies that make them don’t disclose their secret sauce, it’s impossible to know how the programs generate their results.

 