The Black Box of Bail Algorithms: One Sensible Solution
(excerpt from Jurist - Legal News and Research - Mar 14 2019)
JURIST Guest Columnist Jeff Clayton, Executive Director of the American Bail Coalition, discusses a recent Idaho bill proposal that seeks to reform the use of risk assessment algorithms in bail hearings…
Much-needed legislation, which reins in the use of algorithms to determine who stays in jail pending trial and who gets out, recently passed in the Idaho legislature. House Bill 118, sponsored by Representative Greg Chaney (R-Caldwell), is first-of-its-kind legislation that addresses inherent flaws in the criminal justice system. The bill calls for the algorithms to be transparent and certified free of any bias. It’s a good start to addressing a problem that has afflicted not only Idaho but the entire nation.
What are these algorithms, and how do they work? They start with data on individuals who have previously been involved in the criminal justice system, broken down into categories. A typical risk assessment tool will have around ten factors, such as the number of previous convictions in a person’s criminal history, though some have as many as 200. Age, for example, correlates strongly with the probability that a defendant will flee or commit a new crime while out on bail. (Age is also one of the biggest predictors in criminal justice generally: individuals typically “age out” of crime as they get older.) Regression analysis is then performed on the data to reveal these mathematical correlations. Once the data has been analyzed, the tool produces an overall risk score, which is given to judges.
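The mechanics described above can be sketched in a few lines of code. This is a toy illustration, not any vendor’s actual model: the factors and coefficients below are invented, whereas real tools fit their weights by regression on historical case data.

```python
import math

# Invented, illustrative weights. A real tool estimates these by
# regressing pretrial outcomes on factors drawn from criminal history data.
COEFFS = {
    "intercept": -1.5,
    "age": -0.03,                      # older defendants score lower ("aging out")
    "prior_convictions": 0.25,
    "prior_failures_to_appear": 0.40,
}

def risk_probability(age, prior_convictions, prior_ftas):
    """Logistic-regression-style estimate of the probability of
    flight or a new offense while out on bail."""
    z = (COEFFS["intercept"]
         + COEFFS["age"] * age
         + COEFFS["prior_convictions"] * prior_convictions
         + COEFFS["prior_failures_to_appear"] * prior_ftas)
    return 1 / (1 + math.exp(-z))

def risk_score(p):
    """Collapse the probability into a small integer scale,
    similar in spirit to the scores judges actually see."""
    return min(6, 1 + int(p * 6))

young_with_priors = risk_probability(age=22, prior_convictions=3, prior_ftas=1)
older_no_priors = risk_probability(age=50, prior_convictions=0, prior_ftas=0)
print(risk_score(young_with_priors), risk_score(older_no_priors))
```

The opacity problem the column describes is visible even in this toy: without access to `COEFFS` and the underlying data, a defendant has no way to confirm the math behind the number a judge receives.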
There are a number of fundamental problems with the algorithms. Generally, their analyses are not shared with either the public or criminal defendants. Neither gets to see the data that was used to build these tools, examine the regression analysis, or perform even basic checks on the data, such as confirming the math.
As to why this is the case, the reasons lie in the murky world of corporate self-interest. In many instances, the builders of the algorithms are private entities, including the maker of COMPAS and the Arnold Foundation. These builders fear that others will steal their information, build competing algorithms, and take them out of the game. The algorithms are also often constructed from information gleaned from FBI crime files, as well as state and local criminal history databases, and builders and users claim that existing laws prohibit disclosure of such information to the public. In addition, these same proprietors assert that the algorithms are trade secrets and thus protected from disclosure.
Another issue is that the majority of algorithms used around the country have never been tested for racial or other bias. The Arnold Foundation had an apparently phony research center called “RTI International” (which did not even have a website, much less a physical presence) perform a study that, unsurprisingly, concluded its risk assessment tool was free of bias. By contrast, legitimate researchers from ProPublica found just the opposite for the COMPAS tool, damningly concluding that it was, in fact, “biased against blacks.”
To my knowledge, these two are the only risk assessment tools that have ever been tested for racial bias. Even though such tools are deployed in dozens of jurisdictions across the country, from Virginia to California, no one has bothered to check what should be a foundational aspect of the other algorithms. Because of this systemic oversight, we can’t really know whether those tools are inherently biased, but there is evidence to suggest they are.
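The kind of bias test ProPublica ran on COMPAS, comparing false positive rates across racial groups, is simple enough to sketch. The records below are invented for illustration; ProPublica’s actual analysis used thousands of real Broward County, Florida cases.

```python
# Each record: (group, labeled_high_risk, actually_reoffended).
# Invented data for illustration only.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

def false_positive_rate(records, group):
    """Share of a group's non-reoffenders who were nonetheless
    labeled high risk -- the disparity ProPublica highlighted."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(g, false_positive_rate(records, g))
```

In this toy data, group A’s non-reoffenders are flagged high risk far more often than group B’s, which is precisely the pattern that would indicate bias; the point is that any jurisdiction with outcome data could run a check like this, yet almost none have.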
Yet another problem is that risk assessment algorithms don’t seem to deliver what they promise. They have been widely touted as effective at decreasing pretrial incarceration and reducing new crimes committed while out on bail. However, after examining a variety of such tools, a paper published in the Minnesota Law Review concluded that they largely have little to no positive effect on the system, and even a negative effect in a substantial number of cases.