Algorithmic policing – "Trust is a must, not a nice to have": Julian Hayes and Suzanne Gallagher write for Global Data Review

BCL partner Julian Hayes and associate Suzanne Gallagher's article 'Algorithmic policing – "Trust is a must, not a nice to have"' has been published by Global Data Review.


Here's an extract from the article:

The dangers of idolising algorithms

From identifying missing individuals to solving cold cases by scanning old CCTV footage, the use of artificial intelligence by law enforcement is reaping benefits. However, its rapid development and deployment by police forces across the world is also causing unease.

Algorithms are not infallible and are only as good as the datasets on which they are trained. Bias in a training dataset will be carried through when the algorithm is deployed in the real world. US studies have shown that some facial recognition algorithms can be up to 34% less accurate at recognising non-Caucasians than Caucasians. Where tensions between the police and some ethnic groups are already heightened, the risk of misidentification and miscarriages of justice, and the consequent erosion of confidence in law enforcement, is acute.

Likewise, the results of predictive crime mapping can turn into self-fulfilling prophecies with high levels of policing in perceived crime hot-spots simply identifying more offending than in neighbouring areas where crime levels are, in fact, similar. Concentrating police resources on such hot-spots can also lead to allegations of 'over-policing' and harassment, weakening community cohesion.

Finally, some algorithms claiming to predict individual offending are known to have taken into account questionable indicators of recidivism, such as postcodes. In a similar vein, emotion recognition programmes rest on scientists' crude interpretations of facial expressions, which may not allow for cultural mores of which the scientists themselves are ignorant. Nevertheless, predictive policing of this nature makes early intervention by social services or law enforcement possible, and such intervention will become more likely when police fear that failing to act might lead to catastrophe. Yet these interventions carry with them a significant risk of stigmatisation before any offence has even been committed.

International hotchpotch of legal and regulatory frameworks

As algorithmic policing has become more prevalent, legislators have scrambled to keep up, and markedly different regulatory approaches have been adopted.

At one extreme is the Chinese model where, for example, facial and emotion recognition technologies have been swiftly taken up and are now so pervasive that the country borders on a surveillance state, with minor offenders identified and punished and whole communities tracked and incarcerated. At the other extreme, some US cities such as Boston, Portland and San Francisco have banned law enforcement from using facial recognition technology entirely.

This article was originally published by Global Data Review on 14/05/2021. Read the full version on their website.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

