A research group has revealed “serious shortcomings” in the predictive tools used by law enforcement agencies.
Law enforcement agencies have been using the tools to inform decisions about pretrial detention, probation and sentencing; the tools predict, for example, whether someone will fail to appear in court based on their arrest history, demographics and other factors.
Set up in 2016 by companies including Google, Microsoft, Amazon and Facebook, the Partnership on AI released a report stating that algorithmic risk assessment tools cannot provide the necessary transparency and accountability.
“This report documents the serious shortcomings of risk assessment tools in the U.S. criminal justice system, most particularly in the context of pretrial detentions, though many of our observations also apply to their uses for other purposes such as probation and sentencing. Several jurisdictions have already passed legislation mandating the use of these tools, despite numerous deeply concerning problems and limitations. Gathering the views of the artificial intelligence and machine learning research community, PAI has outlined ten largely unfulfilled requirements that jurisdictions should weigh heavily and address before further use of risk assessment tools in the criminal justice system,” according to the researchers.
The researchers were prompted to look into the tools after lawmakers in California introduced legislation that would allow the use of risk assessment tools in pretrial detention decision-making. In addition, the United States Attorney General has been ordered to create an “evidence based” risk assessment system by July 2019 to decide how long inmates should be incarcerated.
Yet Peter Eckersley, the partnership’s director of research, noted that the tools currently available were “not suitable for deciding to detain or continue to detain individuals.”
And it’s not just these researchers who believe the technology is flawed. Earlier this month, the Los Angeles Police Department scrapped its “chronic offender” database amid concerns about inaccuracy, after an audit by the department’s inspector general found “significant inconsistencies” in how the database was being used, uncertainty over whether people were informed they were on it, and a general “lack of oversight.”
“The majority of people identified as chronic offenders had few, if any, actual contacts with the police,” the audit found, according to The Financial Times.
Meanwhile, a study of a risk assessment tool slated for use by the federal government found that only 35 percent of defendants classified as high-risk actually failed to appear for trial or were rearrested before trial. And a study of tools being used by police departments in Chicago, New Orleans and Maricopa County, Arizona, revealed that “dirty data” led to biased and inaccurate predictions.