As the heady clamour of this summer's exam results algorithm fiasco fades into the darkening evenings of the UK winter, the recent proposals for an “Accountability for Algorithms Act” by the Institute for the Future of Work (IFOW) are timely, to say the least.

The proposals, supported by an op-ed in The Times by David Davis MP, aim for “an overarching, principles-driven approach to put people at the heart of developing and taking responsibility for AI, ensuring it is designed and used in the public interest.”1

For observers in the UK, this is an interesting development. 2020 has seen a flurry of paperwork on AI regulation and some interesting debate, particularly around the European Commission's AI White Paper in February and the subsequent consultation. But the UK, having bigger fish to fry, has seemed much less involved.

The proposed Act, detailed in Part 5 of the IFOW's “Mind the Gap” report,2 “would regulate significant algorithmically-assisted decision-making, which met a risk-based threshold, across the innovation cycle, legal spheres and operational domains in the public interest”. An “umbrella, ‘hybrid Act'”, it would help guide and align the existing regulatory ecosystem, the current law, and decisions taken by the makers of algorithms.

A number of proposed statutory duties are given top billing:

  1. A duty on actors developing and/or deploying algorithms, as well as other key actors, to undertake an algorithmic impact assessment, including an evaluation of equality impacts, or a dedicated equality impact assessment.
  2. A duty on actors developing and/or deploying algorithmic systems, as well as other key actors, to make adjustments which are reasonable in the circumstances of the case, with regard to the results of the equality impact assessment.
  3. A duty for actors across the design cycle and supply chain to co-operate in order to give effect to these duties.
  4. A duty to have regard, while making strategic decisions, to the desirability of reducing inequalities of income resulting from socio-economic and also place-based (‘postcode') disadvantage.

Proposals are also made around increasing transparency in the innovation cycle and support for collective accountability (rights for unions and workers vis-à-vis algorithmic systems involving AI used at work).

In terms of regulatory supervision, the IFOW isn't proposing a new regulator – instead the Act would “establish an intersectional regulatory forum to coordinate, drive and align the work of our regulators, and enforce our new duties, which would otherwise lie between the EHRC [the Equality and Human Rights Commission] and the ICO.”

The IFOW is clear that the proposals “need very wide consultation” – i.e. we are at a very early stage – but there appears to be some parliamentary support here. How much governmental and legislative bandwidth the proposals will get, given the competing pressures of COVID-19 and Brexit planning, is clearly another matter.

1  Davis, David, “Proper laws on AI could prevent more algorithm fiascos”, The Times, 28 October 2020, https://www.thetimes.co.uk/article/proper-laws-on-ai-could-prevent-more-algorithm-fiascos-fsgmm2fv0.

2  Institute for the Future of Work, “Mind the Gap: how to fill the equality and AI accountability gap in an automated world”, 26 October 2020, https://uploads-ssl.webflow.com/5f57d40eb1c2ef22d8a8ca7e/5f9850d5410374c05fdc9a84_IFOW-ETF-Report-(v7-27.10.20).pdf.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.