Partnership on AI: Algorithms aren’t ready to automate pretrial bail hearings

(Excerpt from VentureBeat, April 26, 2019)

The Partnership on AI released its first-ever research report today, declaring the algorithms now in use unfit to automate the pretrial bail process, in which some people are labeled high risk and detained while others are labeled low risk, deemed fit for release, and sent home.

The report calls out validity, data sampling bias, and bias in statistical predictions as problems with currently available risk assessment tools. Human-computer interface issues and unclear definitions of high risk and low risk were also cited as important shortcomings.

The Partnership on AI, created in 2016, brings together the biggest names in AI, such as Amazon, Google, Facebook, and Nvidia, with organizations like Amnesty International, the ACLU, the EFF, and Human Rights Watch.

Education, news, and multinational organizations like the United Nations are also members. Though the group was created by Apple, Amazon, Google, and Facebook, more than half of its 80 current member organizations are nonprofits.

PAI recommends policymakers either avoid using algorithms entirely for decision-making surrounding incarceration, or find ways to meet minimum data collection and transparency standards laid out in the report.

The report was motivated by the First Step Act, passed by Congress and signed into law last year by President Trump, as well as California’s SB 10, legislation that would replace the state’s cash bail system with algorithmic risk assessment and will appear on the ballot in 2020.

Both bills were seen by criminal justice reform advocates as part of a broader national movement, but the Partnership on AI says such tools can have a significant adverse impact on millions of lives.

The release of the report is one of the first public and declarative actions by the Partnership since its founding. A focus on criminal justice reform may seem like a left turn for an organization created by the biggest AI companies in the world.

Issues like AI bias at tech giants and the sale of facial recognition software by companies like Microsoft and Amazon seem to have attracted more headlines in recent months. However, Partnership on AI researcher Alice Xiang says the report’s focus on risk assessment algorithms used by judges in pretrial bail decisions was a conscious decision.

“There have already been a lot of concerns about algorithmic bias in various contexts, but criminal justice is really the one where these questions are the most crucial, since we’re actually talking about making decisions about individuals’ liberty, and that can have huge ramifications for the rest of their lives and the lives of others in their communities,” Xiang told VentureBeat in an interview. “Part of our reason for choosing criminal justice for this initial report is that we do think it is really the best example of why fairness is very important to consider in the context of really any use of AI to make important life decisions for individuals, and especially when the government is making those decisions, because then it’s something where issues of transparency too are important to facilitate public discourse.”
