The following op-ed was originally published on January 16 in the Los Angeles/San Francisco Daily Journal.
While the topic of bail reform is not new, the debate is rapidly changing. There is continued movement across the country to implement no-money bail systems. This year, at least a dozen legislatures will consider bail reformers’ claims that risk assessment computers can figure out who is dangerous and should stay in jail, versus who is safe and can be set free.
A year ago, it seemed the deployment of computerized risk instruments would completely remake the criminal justice process.
In just the past six months, however, scholars have begun to question the fairness and transparency of the algorithms used by these instruments. Concern is growing over their potential to perpetuate racial bias, rather than reduce it.
Researchers at New York University’s AI Now Institute recommend that criminal justice agencies stop using “black-box” proprietary algorithms because of serious due process concerns. This has led Kate Crawford, cofounder of AI Now and a leading researcher at Microsoft, to call for a fundamental right to “data due process.” At a minimum, risk assessment algorithms “should be available for public auditing, testing, and review, and subject to accountability standards.”
Concerns over proprietary algorithms are real. The entities behind them have shielded their methodologies and source materials behind contracts with public agencies and assertions of intellectual property. In the process, they have locked out the public, not to mention the actual parties whose freedom is at stake, from knowing how the algorithms were constructed. There is no way for anyone to verify the information upon which the algorithms are based or to check their math, much less test them for possible bias.
In one egregious example, the proprietor of one such algorithm was blatant in its social advocacy. The Laura and John Arnold Foundation filed an amicus brief in Holland v. Rosen in order to stave off a constitutional challenge to New Jersey’s new bail system, which has almost entirely eliminated financial conditions of bail. In that case, currently pending before the 3rd U.S. Circuit Court of Appeals, the plaintiff argues that he has a right to consideration of a financial condition of bail on a level playing field with all other non-financial conditions.
Filed by former U.S. Solicitor General Seth Waxman, the Arnold Foundation’s entry into this case is an indication of specific political bias on the part of an algorithm’s proprietor. This is problematic because nearly every state using the algorithm not only allows consideration of financial conditions of bail but defines financial conditions of bail as bail. If proprietary algorithm providers are allowed to enter the world of advocacy, data due process becomes all the more fundamental, and absolute transparency in implementation becomes a necessity.
Regarding the issue of race, the nonprofit investigative publication ProPublica specifically accused one proprietary algorithm of being “biased against blacks.” An examination of the research reveals that such bias is inevitable, which leads to critical questions: At what point does the amount of bias render the instrument bad public policy? Can we determine when the bias has met the high hurdle of a disparate treatment claim? Similarly, should users and proprietors be required to clear a higher regulatory bar, such as showing that the algorithm does not have a disparate impact upon protected classes, before deploying it?
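Those questions can at least be made concrete. As a purely illustrative sketch in Python (with fabricated toy records and hypothetical helper names, not any vendor’s actual methodology), the kind of audit critics are asking for might compare, per group, the false positive rate, that is, the share of defendants flagged high-risk who did not in fact reoffend, alongside a disparate impact ratio modeled on the EEOC’s four-fifths rule:

```python
# Hypothetical fairness-audit sketch. Illustrative only: the records are
# fabricated and the helper is not any vendor's actual methodology.
from collections import defaultdict

# Each record: (group label, flagged high-risk?, reoffended?)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, False), ("B", False, True),
]

def false_positive_rates(rows):
    """Per group: share of non-reoffenders who were flagged high-risk."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for group, high_risk, reoffended in rows:
        if not reoffended:              # look only at actual non-reoffenders
            negatives[group] += 1
            if high_risk:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print("False positive rate by group:", false_positive_rates(records))

# Disparate impact ratio (four-fifths rule analogue): compare the rate of
# the favorable outcome (NOT being flagged high-risk) across groups.
favorable = defaultdict(lambda: [0, 0])  # group -> [not_flagged, total]
for group, high_risk, _ in records:
    favorable[group][1] += 1
    if not high_risk:
        favorable[group][0] += 1
rates = {g: n / t for g, (n, t) in favorable.items()}
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f} (below 0.80 is a common red flag)")
```

ProPublica’s critique turned on exactly this kind of disparity: roughly similar overall accuracy, but a markedly higher false positive rate for black defendants. The larger point is that none of this can be computed by outsiders when the inputs, weights and validation data sit behind proprietary contracts.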
The promised rewards of the risk-based system have been stated in simple terms: it reduces mass incarceration, keeps communities safer by reducing crime committed while on bail, and improves rates of appearance in court. Unfortunately, it would be difficult, if not impossible, to establish that any of these goals has actually been achieved in the jurisdictions that have adopted it. In fact, in a landmark report published last month, George Mason University law professor Megan Stevenson opened by stating that “virtually nothing is known about how the implementation of risk assessment affects key outcomes: incarceration rates, crime, misconduct, or racial disparities.”
Specifically, after examining more than 1 million case records, Stevenson concluded that use of the risk assessment “did not result in any noticeable improvement in outcomes.” In fact, she added, it had “essentially no effect on releases, failures-to-appear, pretrial crime, or racial disparities in detention.” The data she pored over included many years’ worth from Kentucky, which adopted the Laura and John Arnold Foundation’s risk assessment algorithm and positioned itself as a national leader in bail reform and pretrial release.
Despite these dismal findings, Stevenson noted that the Arnold Foundation had released its own study of its risk assessment in various jurisdictions and inexplicably touted its successes. Regarding Kentucky, the foundation released a report, citing “external pressures,” that was ultimately removed from its website. Stevenson also pointed out an additional key concern with the Arnold Foundation’s suspect research: it examined criminal cases during the pretrial phase, 83 percent of which were still open at the time the study was released.
While proponents continue to push the risk-based system, we would be wise to remember the words of the late Supreme Court Justice Thurgood Marshall, a known skeptic of using risk computers to determine which individuals are “dangerous.” The federal government implemented the risk-based system with the passage of the Bail Reform Act of 1984. The law was challenged from coast to coast and was ultimately upheld in U.S. v. Salerno.
In a scathing dissent in that case, Marshall wrote: “Throughout the world today there are men, women, and children interned indefinitely, awaiting trials which may never come or which may be a mockery of the word, because their governments believe them to be ‘dangerous.’”
Today’s bail reformers are aiming to bring this same law to the states. The question before us remains the same as it was then: should we label people as “dangerous”? And if we do, can we do so fairly, ethically and constitutionally, and in a fashion that delivers results that serve our society? It is probably fair to say that the answer is becoming less clear with every passing day.
Jeff Clayton is the executive director of the American Bail Coalition, which he joined as policy director in May 2015. He has worked in various capacities as a public policy and government relations professional for 15 years, and as a licensed attorney for the past 14 years. He has served as general counsel for the Professional Bail Agents of Colorado, in addition to representing other clients in legal, legislative and policy matters. Clayton spent six years in government service, representing the Colorado State Courts and Probation Department, the Colorado Department of Labor and Employment, and the United States Secretary of Transportation. He is a former Presidential Management Fellow and a finalist for the U.S. Supreme Court Fellows Program. Clayton holds a B.B.A. from Baylor University, an M.S. in public policy from the University of Rochester, and a J.D. from the Sturm College of Law, University of Denver.