Why the criminal justice system should abandon algorithms

(excerpt from TNW Feb 6 2019)

Here’s a choose-your-own-adventure game nobody wants to play: you’re a United States judge tasked with deciding bail for a black man, a first-time offender, accused of a non-violent crime. An algorithm just told you there’s a 100 percent chance he’ll re-offend. With no further context, what do you do?

Judges in the US employ algorithms to predict the likelihood that an offender will commit further crimes, their flight risk, and a handful of other factors. These data points are then used to guide humans in sentencing, setting bail, and deciding whether to grant (or deny) parole. Unfortunately, the algorithms are biased 100 percent of the time.

A team of researchers led by postdoctoral scholar Andrew Selbst, an expert on the legal system and the social implications of technology, recently published research highlighting the unavoidable problem of bias in these algorithms. The research paper indicates there are several major “bias traps” that algorithm-based prediction systems fail to overcome.

On the surface, algorithms seem friendly enough. They help us decide what to watch, what to read, and some even make it easier to parallel park. But even an allegedly unbiased algorithm can’t overcome biased data or inconsistent implementation. Because of this, we have what’s called the “Ripple Effect Trap.”

According to the researchers:

When a technology is inserted into a social context, it has both intended and unintended consequences. Chief among the unintended consequences are the ways in which people and organizations in the system will respond to the intervention. To truly understand whether introduction of the technology improves fairness outcomes, it is not only necessary to understand the localized fairness concerns, as discussed above, but also how the technology interacts with a pre-existing social system.

In essence, this means that by giving judges the discretion to decide when to use the algorithm, we’re influencing the system itself. Human intervention doesn’t make a logic system more logical; it just adds data based on “hunches” and “experience” to a system that doesn’t understand those concepts. The ripple effect occurs when humans intervene in the system (by choosing when to use it based on personal preference) and that same system, in turn, intervenes in human affairs (by making predictions based on historical data). The result is an echo chamber for bias.
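
To make that feedback loop concrete, here is a minimal, purely hypothetical Python simulation of how it can amplify bias. None of this comes from the paper: the group names, rates, and the squared-weighting rule for allocating scrutiny are invented assumptions, meant only to show how a model retrained on records shaped by its own predictions can drift further and further from identical underlying behavior.

```python
# Hypothetical sketch of the "echo chamber" feedback loop (all numbers are assumptions,
# not data from the paper). Two groups have identical true offense rates, but group B
# is over-represented in the historical records the model trains on. The model's scores
# steer where scrutiny goes, scrutiny produces new records, and the gap widens.

true_rate = 0.20                          # same underlying behavior for both groups
recorded = {"A": 0.20, "B": 0.28}         # biased historical record for group B

for generation in range(6):
    # "Training": the model's predicted risk is simply each group's recorded rate.
    predicted = dict(recorded)

    # Scrutiny is allocated in proportion to the *square* of predicted risk
    # (a stand-in for "send resources where the model says risk is highest").
    weight = {g: predicted[g] ** 2 for g in predicted}
    total = sum(weight.values())
    share = {g: weight[g] / total for g in weight}

    # New records reflect scrutiny, not behavior: equal scrutiny would recover
    # the true rate for both groups; unequal scrutiny skews the next dataset.
    recorded = {g: min(1.0, true_rate * 2 * share[g]) for g in share}

    print(f"gen {generation}: recorded rates A={recorded['A']:.2f}  B={recorded['B']:.2f}")

# Despite identical true behavior, group B's recorded rate climbs each round
# while group A's falls: the model keeps re-learning its own earlier decisions.
```

Even with both groups behaving identically, the group that starts out over-represented in the records ends up absorbing nearly all of the predicted risk within a few iterations.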

Perhaps the biggest problem with algorithms is that they’re based on math – justice is not. This is called the “Formalism Trap.”

The researchers wrote:

Failure to account for the full meaning of social concepts such as fairness, which can be procedural, contextual, and contestable, and cannot be resolved through mathematical formalisms.

Algorithms are just math; they can’t self-correct for bias. To make them spit out a prediction, you have to find a way to represent the relevant concepts as numbers or labels. But social concepts like justice and fairness can’t be reduced to math or labels, because they’re in a constant state of flux, always being debated, and subject to public opinion.
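
To illustrate the “numbers or labels” point, here is a minimal, hypothetical risk-score sketch. The feature names, weights, and logistic form are invented for illustration and do not reflect any real pretrial tool; the point is simply that everything the model can consider must first be flattened into a numeric feature, and a concept like fairness never appears anywhere in that encoding.

```python
# Minimal, hypothetical risk-score sketch (feature names, weights, and the logistic
# form are invented for illustration; this is not any vendor's actual model).
import math

def risk_score(defendant: dict) -> float:
    """Logistic score built only from what can be encoded as numbers or labels."""
    # Every concept the model can "see" must be flattened into a feature value.
    features = {
        "prior_arrests":    defendant["prior_arrests"],           # count
        "age_under_25":     1 if defendant["age"] < 25 else 0,    # label -> 0/1
        "failed_to_appear": 1 if defendant["failed_to_appear"] else 0,
    }
    weights = {"prior_arrests": 0.6, "age_under_25": 0.8, "failed_to_appear": 1.1}
    bias = -2.0

    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))   # probability-like score between 0 and 1

# A first-time, non-violent defendant reduced to three numbers:
print(risk_score({"prior_arrests": 0, "age": 23, "failed_to_appear": False}))
# Nothing in this arithmetic represents procedural or contextual fairness;
# the score only reflects whatever the encoded features (and their history) contain.
```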
