NEW: Landmark Report on Criminal Risk Algorithms Demonstrates Why CO House Bill 1226 is Fatally Flawed

On Friday, the Partnership on AI (Artificial Intelligence) issued a landmark research report on risk assessments in criminal justice, according to a VentureBeat.com article entitled "Algorithms aren’t ready to automate pretrial bail hearings."  The Partnership on AI comprises more than 80 national groups and companies, including Accenture, the ACLU, the Berkman Klein Center at Harvard University, Google, IBM, Samsung, and Sony.  The report calls for ending the use of risk assessments in criminal justice unless and until the algorithms can meet minimum standards.  House Bill 1226, which is slated to be heard in the Senate State, Veterans & Military Affairs Committee, would do the opposite: it would expand risk assessment algorithms statewide, require that they be run on defendants within 24 hours, and force judges to consider them.

The Partnership report makes two things clear as we apply it to House Bill 1226:

  1. The algorithm process is probably fatally flawed, and for that reason a statewide expansion and codification of this movement is not appropriate; or,
  2. House Bill 1226 fails every one of the ten recommendations made in the report and therefore must be substantially amended or shelved until the appropriate changes can be made.

On point one, the report found that none of the tools currently in use, including Colorado’s existing tool (the CPAT), which the report specifically considered, meets basic requirements of fairness:

“Using risk assessment tools to make fair decisions about human liberty would require solving deep ethical, technical, and statistical challenges, including ensuring that the tools are designed and built to mitigate bias at both the model and data layers, and that proper protocols are in place to promote transparency and accountability. The tools currently available and under consideration for widespread use suffer from several of these failures, as outlined within this document.”

Later in the report, the Partnership discusses two proposed solutions to this quandary, the first of which is that jurisdictions simply stop using risk assessment tools:

“One approach is for jurisdictions to cease using the tools in decisions to detain individuals until they can be shown to have overcome the numerous validity, bias, transparency, procedural, and governance problems that currently beset them. This path need not slow the overall process of criminal justice reform. In fact, several advocacy groups have proposed alternative reforms that do not introduce the same concerns as risk assessment tools.”

In other words, the tools must first meet the basic requirements recommended by the report, or they should not be used.

The second solution is to improve the tools so that they meet the requirements rather than banning them outright:

“Another option is to embark on the project of trying to improve risk assessment tools. That would necessitate procurement of sufficiently extensive and representative data, development and evaluation of reweighting methods, and ensuring that risk assessment tools are subject to open, independent research and scrutiny. The ten requirements outlined in this report represent a minimum standard for developers and policymakers attempting to align their risk assessment tools—and how they are used in practice—with well-founded policy objectives.”

So, on point two, let’s examine whether House Bill 1226 sets up an architecture to deal with these issues.  The short answer is that the legislation completely fails in this respect.  Like a batter going 0 for 10, it’s time to send this bill back to the minors.

Recommendation #1. FAIL: the existing algorithm in use in Colorado has not been revalidated since 2013.  The University of Northern Colorado is currently attempting to validate a new risk assessment, but that work will not be complete for another year.

Recommendation #2. FAIL: the legislation does not require that the tool meet a particular standard for controlling bias, only that it be “evaluated and validated” to “minimize bias.”  Best practice, as the report notes, would be to require the selection of a particular standard and to ensure that the tool meets it.  The report observes that several such standards exist and that "this is not only a technical question but also a question of law, policy, and ethics.”  Under the bill’s language, a report will be presented, but there is no requirement that the risk assessment tool ever meet a certain identifiable standard, nor that it be taken out of use if it fails one (for example, if it is determined beyond doubt that the tool is specifically and identifiably biased against members of a particular protected class).  A minimal sketch of what such a concrete, testable standard could look like appears below.
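
To make that concrete, here is a minimal sketch of an identifiable, testable standard.  Everything in it is an assumption for illustration: the equal-false-positive-rate metric is just one of the several competing fairness standards the report alludes to, and the 5% tolerance is an invented policy choice, not anything in the bill or the report.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of people who did NOT fail pretrial but were flagged high risk."""
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives]) if negatives.any() else float("nan")

def passes_bias_standard(y_true, y_pred, group, max_gap=0.05):
    """One possible concrete standard: the false positive rate may not differ
    across protected-class groups by more than max_gap.  The 0.05 tolerance
    is an illustrative policy choice, not a value from the bill or report."""
    rates = [false_positive_rate(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)]
    return max(rates) - min(rates) <= max_gap

# Hypothetical data: 1 = failed/flagged, 0 = not; group labels are placeholders.
y_true = np.array([0, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 0, 1, 0, 1, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(passes_bias_standard(y_true, y_pred, group))  # False -> tool fails the standard
```

The point is not this particular metric; it is that a pass/fail test like this can be written down, audited, and enforced, and House Bill 1226 writes down none.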

A second problem is that the legislation does not protect against bias for all protected classes; it sweeps in only “race, ethnicity and gender.”  Federal protected classes include:

  • Race
  • Color
  • Religion or creed
  • National origin or ancestry
  • Sex
  • Age
  • Physical or mental disability
  • Veteran status
  • Genetic information
  • Citizenship

Also, sexual orientation is a protected class for many purposes in Colorado as a result of the passage of Senate Bill 08-208, and it is similarly excluded from the list.

Third, it is important to point out that CCJJ sent the bill over to the House without a word in the introduced legislation on the topic of bias.  They ignored it, and despite the million hours government employees spent locked in a dusty, nondescript room to fix the system, they somehow missed that, at the very time they were recommending this expansion, more than 100 national civil rights groups had called last summer for an end to risk assessments unless questions of validity and racial bias were addressed.  For CCJJ, it’s no worries: we are going to recommend that a currently invalid assessment that has not been tested for racial or any other bias be expanded statewide.  This borders on absurdity at this point.

Recommendation #3.  FAIL: the legislation does not require a separate prediction for each distinct event, and will continue to allow the conflation of two or more distinct events (failing in some respect while on release, e.g., failure to appear, a new crime, etc.).  The legislation specifically requires that risk be reported as risk of “pretrial failure.”  That conflates different outcomes and weighs them equally: for example, a person who skipped court for a ski trip versus a person who committed a new violent felony.  The sketch below illustrates the difference.
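
Here is a minimal sketch of the distinction, using synthetic data and ordinary logistic regression (both our assumptions for illustration; this is not the CPAT’s actual method): one model per distinct outcome, as the report recommends, versus the single lumped “pretrial failure” label the bill requires.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # hypothetical defendant features

# Two distinct outcomes that a single "pretrial failure" label would conflate.
missed_court = (X[:, 0] + rng.normal(size=500)) > 1.0   # e.g., failure to appear
new_offense  = (X[:, 1] + rng.normal(size=500)) > 1.5   # e.g., new criminal charge

# What the report recommends: one prediction per distinct event.
fta_model   = LogisticRegression().fit(X, missed_court)
crime_model = LogisticRegression().fit(X, new_offense)

# What "risk of pretrial failure" does instead: collapse both into one label,
# so a likely ski-trip no-show and a likely violent felony score the same way.
any_failure  = missed_court | new_offense
lumped_model = LogisticRegression().fit(X, any_failure)

person = X[:1]
print("P(failure to appear): ", fta_model.predict_proba(person)[0, 1])
print("P(new offense):       ", crime_model.predict_proba(person)[0, 1])
print("P('pretrial failure'):", lumped_model.predict_proba(person)[0, 1])
```

A judge seeing only the last number has no way to know which kind of risk is driving it, which is precisely the report’s objection.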

Recommendation #4. FAIL: the report specifically notes that the Colorado Pretrial Assessment Tool (CPAT) currently used by some counties fails to meet the fourth requirement: “there are also substantial gaps between the intuitive and the correct interpretations of risk categories in Colorado’s Pretrial Assessment Tool.”

Recommendation #5. FAIL: the current CPAT tool does not produce confidence estimates.  In other words, the tool should report a range at a stated level of statistical confidence.  For example, if a person is classified into a group said to be 60% likely to fail, what is the confidence interval around that estimate?  50-70%?  40-80%?  House Bill 1226 in fact specifically requires that persons be classified based on a bare “predicted level of pretrial failure,” with no measure of uncertainty attached.  The sketch below shows how much that missing interval can matter.
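
Here is a minimal sketch of the missing piece: a 95% Wilson score confidence interval around an observed group failure rate.  The group sizes are hypothetical and chosen only to show how sample size drives the width of the interval, which is exactly the information a judge never sees under House Bill 1226.

```python
from math import sqrt

def wilson_interval(failures, n, z=1.96):
    """95% Wilson score confidence interval for an observed failure rate."""
    p = failures / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    halfwidth = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - halfwidth, center + halfwidth

# Same 60% observed rate, very different certainty depending on group size.
for n in (25, 250, 2500):        # hypothetical numbers of observed defendants
    failures = n * 6 // 10       # 60% observed failure rate in each group
    lo, hi = wilson_interval(failures, n)
    print(f"n={n}: 60% failure rate, 95% CI ({lo:.0%}, {hi:.0%})")
```

With 25 people behind a risk category, that “60%” could plausibly be anywhere from about 41% to 77%; with 2,500, the interval tightens to a couple of points.  Reporting the point estimate alone hides that difference entirely.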

Recommendation #6. FAIL: the legislation requires that training be given to local pretrial services programs but not to “judges, attorneys, and court employees.”  In addition, the report demands specific training content that is not part of House Bill 1226: “These trainings should address the considerable limitations of the assessment, error rates, interpretation of scores, and how to challenge or appeal the risk classification. It should likely include basic training on how to understand confidence intervals.”  This also matters because tools used without basic due process warnings (a “written advisement listing the limitations”), which the Colorado tool does not currently employ, may violate the Due Process Clause of the Fourteenth Amendment to the U.S. Constitution under State v. Loomis.

Recommendation #7. FAIL: there is no stated goal for what the tool is supposed to accomplish.  In fact, in Colorado, it is not known who created the risk cut points or how.  We have indicated this violates the due process clause because, in effect, it creates a board that does not have to follow the open meetings or records laws and sets consequences for particular conduct that are never disclosed to the persons who may commit it.  Compounding the problem, the legislature in House Bill 1226 is delegating substantial power to the judicial branch, which is exempt from the open meetings and records laws and follows its own policies.

Recommendation #8. FAIL: the current CPAT tool lacks sunshine and transparency.  We have asked for the underlying data at least 10 times, only to be told that releasing it is “not in the public interest” or that it was derived from the FBI’s crime files.  We encouraged the Colorado House to follow the lead of Idaho’s House Bill 118 and bring some sunshine and transparency to the process, but that request fell on deaf ears.  In fact, the sponsor of that bill, Rep. Greg Chaney, took to the airwaves over the weekend to recommend that Colorado slow the train on this bill and make reasonable amendments to it.

Recommendation #9. FAIL: the legislation fails to meet the basic requirements of this section, which are to facilitate defendants’ ability to challenge this information by creating accessibility policies and an “audit trail” that will allow for further testing.  An audit trail is not an exotic ask, as the sketch below shows.
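
Here is a minimal sketch of what an audit trail could look like; the field names and the append-only JSON-lines format are our illustrative choices, not anything specified in the bill or the report.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_assessment(logfile, case_id, tool_version, inputs, score, category):
    """Append one tamper-evident record per assessment so a defendant
    (or an independent auditor) can later reconstruct and challenge the score."""
    record = {
        "case_id": case_id,
        "tool_version": tool_version,  # which algorithm revision produced the score
        "inputs": inputs,              # exactly what the tool was fed
        "score": score,
        "category": category,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Checksum over the record contents makes after-the-fact edits detectable.
    record["checksum"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical entry; every field value here is illustrative.
log_assessment("audit.jsonl", "2019-CR-0001", "cpat-draft-0",
               {"age": 30, "prior_fta": 1}, 42, "high")
```

A few dozen lines of logging are all it takes to make every assessment reviewable; House Bill 1226 requires nothing of the kind.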

Recommendation #10. FAIL: House Bill 1226 makes the State Court Administrator the architect, builder, and inspector of this new technological pretrial justice system.  He picks the algorithm, he sets the policies, and then he comes in and audits himself.  This recommendation instead calls for independent oversight, which 1226 lacks.  Second, the recommendation notes that audits need to be localized: “Such review processes must also be localized because the conditions of crime, law enforcement response, and culture among judges and clerks are all local phenomena.”  In addition, national best practices require local validation of statewide tools, and House Bill 1226 does not require local validation.

House Bill 1226 fails to meet any of the ten requirements, and legislators should oppose it.  To suggest these programs should be expanded statewide, with power ceded to the State Court Administrator to fix them, is nothing more than airy hope and wishful thinking.

