Many American courts are using opaque computer algorithms to help judges decide whether a person should be granted bail or parole, and how long their sentence should be. However, some have pointed to apparent racial bias against African-Americans in such algorithms and have questioned their effectiveness in predicting recidivism (reoffending).

Courts increasingly using AI algorithms for predicting recidivism

One such artificial intelligence algorithm is called COMPAS, short for Correctional Offender Management Profiling for Alternative Sanctions.

Over the past 10 years it has been used in more than a million court hearings from California to New York to predict whether a person facing the court will commit another crime if they are released on bail or parole.

Judges in some US states are also using COMPAS to decide the length and severity of the sentences they hand down, based on the algorithm's prediction of the likelihood of recidivism and its assessment of the convicted person's danger to society.

Process and code of algorithm used for predicting recidivism kept secret

It works like this: defendants fill in a questionnaire about their personal history, job, education and family, to which their criminal record is added.

The algorithm scores them from one to ten based on 137 factors to indicate their potential to reoffend.

The code and processes underlying COMPAS are secret, known only to the algorithm's maker, US company Northpointe (now Equivant). Judges, prosecutors and defence lawyers do not know how COMPAS reaches its conclusions, but many US states have ruled that judges can use it to help make decisions on bail, parole and sentencing.
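Because the model itself is secret, any description of its internals is necessarily speculative. The following is a minimal, purely hypothetical sketch in Python, with invented factor names, weights and thresholds, of how a questionnaire-based tool of this kind might turn weighted answers into a one-to-ten risk score. It is not COMPAS's actual method.

```python
# Purely hypothetical illustration. COMPAS's real model, factors and weights
# are proprietary and have never been published. This sketch only shows the
# general shape of a questionnaire-based risk score: weighted answers summed
# and mapped onto a 1-10 scale.

# Example factors and weights are invented for illustration.
HYPOTHETICAL_WEIGHTS = {
    "age_at_first_arrest": -0.3,   # younger first arrest -> higher risk
    "prior_convictions": 0.8,
    "employment_status": -0.4,     # stable employment -> lower risk
    "substance_abuse_history": 0.5,
}

def raw_risk_score(answers: dict[str, float]) -> float:
    """Sum weighted questionnaire answers into an unbounded raw score."""
    return sum(HYPOTHETICAL_WEIGHTS[k] * v for k, v in answers.items()
               if k in HYPOTHETICAL_WEIGHTS)

def decile_score(raw: float, low: float = -2.0, high: float = 4.0) -> int:
    """Map the raw score onto a 1-10 scale of the kind COMPAS reports."""
    clamped = min(max(raw, low), high)
    return 1 + round(9 * (clamped - low) / (high - low))

if __name__ == "__main__":
    defendant = {
        "age_at_first_arrest": 1.0,      # normalised answer values
        "prior_convictions": 3.0,
        "employment_status": 0.0,
        "substance_abuse_history": 1.0,
    }
    print(decile_score(raw_risk_score(defendant)))  # prints 8: "high risk"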

Claims algorithm no better than humans at predicting recidivism

Some studies have found the algorithm is no better at predicting criminal recidivism than humans. (Please see Sentence by numbers: the scary truth behind risk assessment algorithms, Center for Digital Ethics and Policy, 7 May 2018.)

There are also accusations that COMPAS is biased against African-Americans.

In 2016 ProPublica compared the outcomes of two risk assessments in Florida. In one case, an 18-year-old black woman who had taken a child's bicycle was judged by the algorithm to be at "high risk" of reoffending, while a 41-year-old white man, a seasoned criminal who had been convicted of armed robbery, was assessed as "low risk".

The algorithm got both assessments wrong: two years later the white man got eight years for burglary, while the black woman had not been charged with any new crimes.
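Evaluations of such tools, including ProPublica's, typically tally errors of both kinds across many cases: people flagged as high risk who do not reoffend (false positives) and people rated low risk who do (false negatives). A minimal illustrative sketch of that bookkeeping, with invented records, might look like this:

```python
# Minimal, illustrative bookkeeping of prediction errors against follow-up
# outcomes. The records below are invented; real evaluations (such as
# ProPublica's) use thousands of cases with actual two-year reoffending data.

# Each record: (predicted "high risk"?, actually reoffended within two years?)
records = [
    (True, False),   # e.g. the woman rated high risk who was not charged again
    (False, True),   # e.g. the man rated low risk who later reoffended
    (True, True),
    (False, False),
]

false_positives = sum(1 for pred, actual in records if pred and not actual)
false_negatives = sum(1 for pred, actual in records if not pred and actual)

print(f"false positives (flagged but did not reoffend): {false_positives}")
print(f"false negatives (not flagged but reoffended):   {false_negatives}")
```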

In another case, in Wisconsin, the prosecution and defence agreed on a plea deal of a year in jail with follow-up supervision for a black man found guilty of stealing a lawnmower and some tools.

However, the judge said COMPAS predicted a high risk of future violent crime, overturned the plea deal, and instead imposed a two-year prison sentence followed by three years of supervision. (Please see Machine bias, ProPublica, 23 May 2016.)

Australian research on computerised screening for predicting recidivism

Australian academic research into risk assessment has examined COMPAS and other forms of computerised predictions of criminal behaviour.

This includes research by the NSW Bureau of Crime Statistics and Research on risk assessment of offenders. (Please see Improving the efficiency and effectiveness of the risk/needs assessment process for community-based offenders, NSW Bureau of Crime Statistics and Research, December 2011.)

In response to the increasing use of AI in different jurisdictions, the Australasian Institute of Judicial Administration Incorporated recently released a report to guide courts and tribunals through the different AI and automated decision-making tools available, as well as the opportunities and challenges they bring. (Please see AI Decision-Making and the Courts: A guide for Judges, Tribunal Members and Court Administrators, June 2022.)

Future of AI in Australian courts for predicting recidivism

As artificial intelligence (AI) becomes increasingly popular in courts internationally, the question arises as to its place in Australian courts. (Please see AI is creeping into our courts. Should we be concerned? - Business News Australia, 28 September 2022.)

While NSW judges must make difficult decisions on whether to grant bail and on the length of sentences to protect society, they also have a duty to consider justice.

In my opinion, it is not possible to remove the human element from sentencing (for example, the nuances involved in submissions on sentence) and other procedures without undermining the basis of the current legal system.

Bail and parole decisions are similar exercises, where AI may in fact be a belated attempt to catch up with and exploit the databases already available to lawyers and the judiciary.

The AI input seems to be a method of "averaging" that is better suited to the online betting outlets expanding massively in Australia than to the judicial system.

It would be interesting to see whether the use of algorithms in the US criminal justice system produces any beneficial outcome whatsoever over time.

Juries are the ultimate leveller in our legal system, and the mere availability of an alternative system - such as the algorithms underpinning the many wonderful social media sites that now exist - should not be allowed to replace the jury without qualitative outcome data.

Using a secret computer algorithm to decide whether to grant bail, or how long a person should spend in jail, flies in the face of the legal ethics and moral principles of our courts.

While technology may be able to help in risk assessment, it should never be left up to a computer algorithm to decide a person's fate.

John Gooley
Criminal law
Stacks Collins Thompson

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.