A recent post on the ICO's AI Auditing Framework blog explores human bias and discrimination in AI systems, together with some of the technical and organisational measures which can be implemented to mitigate the legal risks associated with these issues.

What causes bias and discrimination in AI?

Assuming that the underlying algorithms are not themselves biased, the reason some AI systems exhibit biased or discriminatory behaviour may be that the data used to train them is already biased or discriminatory in some way.

The blog post illustrates this with a hypothetical scenario: an AI system used by a bank to calculate credit risk is giving women lower credit scores than men. The blog explains that there are two likely reasons for this. First, male borrowers may be over-represented in the training data, which will skew the performance of the AI in favour of male borrowers. Second, the training data may reflect past discrimination: if female borrowers were previously rejected more often than male borrowers, whether as a result of direct discrimination on the basis of gender or indirect discrimination through 'proxy variables' (for example, where borrowers were rejected on the basis of their occupation, and certain occupations are traditionally held by more women than men), the system will learn to reproduce that pattern.
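To make these causes easier to spot in practice, a simple exploratory check of the training data can reveal both kinds of skew before any model is trained. The sketch below is purely illustrative and is not drawn from the ICO blog post; the file name, the column names ('gender', 'approved', 'occupation') and the lending scenario are hypothetical assumptions.

```python
import pandas as pd

# Hypothetical historical lending data of the kind used to train a credit-risk model.
df = pd.read_csv("historical_loans.csv")

# 1) Over-representation: how balanced is the training data itself?
print(df["gender"].value_counts(normalize=True))

# 2) Past discrimination reflected in outcomes: do historical approval rates differ by group?
print(df.groupby("gender")["approved"].mean())

# Proxy variables: an apparently neutral feature such as occupation may correlate
# strongly with a protected characteristic and carry the same bias indirectly.
print(pd.crosstab(df["occupation"], df["gender"], normalize="index"))
```

If either the group proportions or the historical approval rates are heavily skewed, a model trained on this data is likely to reproduce that skew.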

What are the legal risks associated with bias and discrimination in AI?

The UK's anti-discrimination legislative framework, including the Equality Act 2010, protects individuals from discrimination (whether by humans or by automated systems). This framework is supplemented by the GDPR, which notes at Recital 71 that data controllers should take measures to prevent 'discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or processing that results in measures having such an effect'.

As such, controllers using AI systems to process data or make decisions about individuals must take steps to prevent discriminatory behaviour in such systems to avoid breaching the legislation, just as controllers must take steps to prevent discriminatory behaviour among their human processors and decision makers.

Controllers should also take into account other risks associated with concerns about bias and discrimination in AI, including reputational and commercial risks. By way of example, earlier this year a group of Amazon shareholders sought to prevent the company from selling its facial recognition technology, Rekognition, because of concerns around bias.

What technical and organisational measures can be taken to mitigate these risks?

The most appropriate measures will depend on the organisation in question, the personal data being processed, and the purpose of the AI system itself. For example, certain organisations may be more likely to process protected characteristics (such as gender and race), and some AI systems may be more at risk of exhibiting bias or discrimination (such as those used to make predictions or decisions about individuals). However, the ICO blog outlines some general considerations which should be taken into account.

In relation to technical measures, if the bias or discrimination is caused by an unbalanced data set, the simplest remedies are those which effectively 're-balance' the data, by adding data for under-represented groups or removing data for over-represented ones. Alternatively, organisations can implement technical measures within the AI system itself, such as training the system differently, or modifying the system after training to achieve algorithmic "fairness".
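By way of illustration, one simple re-balancing approach is to over-sample the under-represented group so that both groups appear in roughly equal proportions. The sketch below is a minimal example using scikit-learn's resample utility; the file and column names are hypothetical, and whether over-sampling, under-sampling or an in-training or post-training adjustment is more appropriate will depend on the system and data in question.

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical training data in which female borrowers are under-represented.
df = pd.read_csv("training_data.csv")

majority = df[df["gender"] == "male"]
minority = df[df["gender"] == "female"]

# Over-sample the under-represented group so that both groups are equally sized.
minority_upsampled = resample(
    minority,
    replace=True,              # sample with replacement
    n_samples=len(majority),   # match the size of the majority group
    random_state=42,           # for reproducibility
)

balanced = pd.concat([majority, minority_upsampled])
print(balanced["gender"].value_counts())
```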

In relation to organisational measures, the blog post advises that organisations implementing AI systems should document their approach to mitigating bias and discrimination from the outset, so that appropriate safeguards and technical measures can be put into place during the design and build phase. A Data Protection Impact Assessment (DPIA) will also likely be required. The post also recommends that organisations establish clear policies and good practices for procuring high-quality training and test data, and "satisfy themselves that the data is representative of the population the [AI] system will be applied to". The individuals accountable for the organisation's compliance with applicable data protection and anti-discrimination law will also need a sufficient understanding of the risks and mitigation strategies.
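As a rough illustration of the 'representative data' point, an organisation might compare the make-up of its training data against known figures for the population the system will be applied to. The sketch below is hypothetical; the benchmark proportions and column names are assumptions for illustration only, not values suggested by the ICO.

```python
import pandas as pd

# Hypothetical training data and an assumed population benchmark.
train = pd.read_csv("training_data.csv")
population_benchmark = {"female": 0.51, "male": 0.49}  # illustrative reference values

training_share = train["gender"].value_counts(normalize=True)
for group, expected in population_benchmark.items():
    observed = training_share.get(group, 0.0)
    print(f"{group}: {observed:.1%} of training data vs {expected:.1%} of population")
```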

Summary

As AI systems become more prevalent, particularly those used to make automated decisions about individuals, it is increasingly important for organisations to be aware of the risks associated with bias and discriminatory behaviour in such systems. The most appropriate technical and organisational measures for mitigating these risks will be situation-dependent, but organisations implementing such systems should, at a minimum, consider undertaking a DPIA, maximising the integrity of the data sets used for training, and ensuring that the system is tested and monitored for unbalanced behaviour. The blog post also notes that, whilst not a legal requirement, a diverse workforce can be a powerful tool for managing bias and discrimination in AI systems, and suggests that an exercise to identify and prevent bias and discrimination in AI systems may also provide an opportunity for organisations to uncover and address existing discriminatory practices more generally.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.