In advance of the 2022 World Cup, the use of biometric data at mass sporting events has emerged as one of the key talking points.

The topic is an interesting one because it illustrates the challenges of using biometric data alongside other technologies, such as artificial intelligence, across many domains, including sporting events.

What are biometric technologies?

Biometric technologies use some aspect of a person's biology to identify them - something that is unique to the individual and can tell people apart with a high degree of accuracy. Examples include fingerprints, facial features, the iris, gait, and voice, all of which have been used as biometric data in applications. Technologies that can recognise biometric data offer individuals and organizations confidence that only those with the required authorisation can access sensitive information, provided the technology is accurate and the data is available.

What is it used for?

Two types of biometric application have been identified: one-to-one and one-to-many.

One-to-one applications use biometric data to grant (or deny) access to a system to which only authorized persons are admitted. A common example is using your thumbprint to unlock your own smartphone or access your online bank account. Once enrolled, the system should prevent other people from accessing your account unless they have control of your thumb. (That does create a risk of kidnapping, which more sophisticated systems seek to preclude.) The use of fingerprints, faceprints, or voiceprints as a form of secure authentication of identity is becoming increasingly widespread.
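To make the one-to-one pattern concrete, the sketch below shows, in very simplified form, how a verification check might compare a freshly captured biometric sample against the single template enrolled for the claimed identity. It is a minimal illustration only: the function names, the 128-dimensional feature vectors, and the 0.85 threshold are assumptions made for the example, not the workings of any particular product.

```python
import numpy as np

# Minimal sketch of one-to-one verification (illustrative only).
# A probe sample (e.g. a fingerprint or face feature vector) is compared
# against the single template stored for the claimed identity.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two biometric feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled_template: np.ndarray,
           threshold: float = 0.85) -> bool:
    """One-to-one check: does the probe match the enrolled template?"""
    return cosine_similarity(probe, enrolled_template) >= threshold

# Example: a noisy new capture of the enrolled user's biometric.
rng = np.random.default_rng(0)
enrolled = rng.random(128)                     # template captured at enrolment
probe = enrolled + rng.normal(0.0, 0.05, 128)  # fresh, slightly noisy sample
print("Access granted" if verify(probe, enrolled) else "Access denied")
```

The key point is that the decision is binary and concerns a single claimed identity; the system never needs to search a wider population.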

The benefits of using biometric technology for applications that aim to verify identity are numerous. They include speed (reducing operational costs by automating customer authentication processes) and accuracy, with the ability to identify and authenticate an individual within seconds. Biometric authentication is usually safer and more secure than other forms of authentication because it relies on unique biological qualities, which are harder to forge or steal than documents or passwords. It is also highly convenient, allowing customers to be authenticated wherever they are in the world, provided they have an internet connection. Most importantly, it reduces the potential for fraud, benefiting financial institutions and their customers alike.

On the other hand, biometric data is not always available. Not everyone has fingerprints, for example, nor can everyone always speak. Similarly, biometric data such as voice may be difficult for a computer system to recognize: many people outside the USA have had the experience of automated telephone voice systems failing to recognize their accents.

A second type of application is known as one-to-many. Here, the biometric data of one target person is compared against a database containing biometric data for many people, in an attempt to find a match and thereby identify the target. At mass sporting events, one application is to ensure that fans who are banned from attending do not gain access to the stadium. Police authorities have long used fingerprints in this way, trying to match prints found at a crime scene against those held in a police database. Initially the matching was done with paper records and manual searches, but electronic databases and computer matching are now used. Increasingly, biometric data other than fingerprints are used in this way, for example in gait-recognition and facial-recognition applications in public-security and anti-terrorist domains. Many of these one-to-many applications have been criticised on both technical and invasion-of-privacy grounds.
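Again purely as an illustration, the sketch below shows the shape of a one-to-many search: a single probe is compared against a gallery (for example, a watch-list of banned fans) and the best match above a threshold, if any, is reported. The identity labels, gallery size, and threshold are invented for the example.

```python
import numpy as np

# Minimal sketch of one-to-many identification (illustrative only).
# One probe is compared against every template in a gallery (watch-list)
# and the best-scoring identity above a threshold is returned.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.85):
    """Return the best-matching identity in the gallery, or None if no match."""
    best_id, best_score = None, threshold
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Example: screening a capture taken at the stadium gate against a watch-list.
rng = np.random.default_rng(0)
names = ("banned_fan_017", "banned_fan_042", "banned_fan_103")
gallery = {name: rng.random(128) for name in names}
probe = gallery["banned_fan_042"] + rng.normal(0.0, 0.05, 128)  # noisy capture
print(identify(probe, gallery))  # expected: banned_fan_042
```

Unlike the one-to-one case, every person passing the camera is compared against the whole database, which is why false-match rates and the composition of that database matter so much more in this setting.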

The role of artificial intelligence and biometric data

Although one-to-one verification systems may use clever computer storage, retrieval, or processing algorithms, the main applications of AI with biometric data have been in one-to-many applications. For example, matching target faces against a database of facial images may involve sophisticated AI machine-learning systems that first identify matching components of the face, such as lips, ears, or patches of different skin colours. Accurately matching images of people captured under different degrees or sources of light, or with the head held in slightly different positions, also benefits from AI, and the leading commercial systems incorporate these techniques.
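Conceptually, much of this comes down to the AI model mapping each photograph to a compact feature representation that is designed to stay stable across changes in lighting and pose, so that comparison happens between features rather than raw pixels. The toy sketch below illustrates only that idea: the embed() function is a crude stand-in for a trained neural network, and the images, normalisation, and threshold are invented for the example.

```python
import numpy as np

# Conceptual sketch: compare faces via features rather than raw pixels.
# embed() is a crude stand-in for a trained face-embedding network; a real
# system would learn this mapping from large amounts of training data.

def embed(image: np.ndarray) -> np.ndarray:
    """Map an image to a unit-length feature vector, discounting global lighting."""
    flat = image.astype(float).ravel()
    flat = (flat - flat.mean()) / (flat.std() + 1e-8)  # crude lighting normalisation
    features = flat[:128]                              # pretend these are learned features
    return features / (np.linalg.norm(features) + 1e-8)

def same_person(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.8) -> bool:
    """Decide whether two photos show the same person, using their features."""
    return float(embed(img_a) @ embed(img_b)) >= threshold

# The "same face" photographed under brighter, higher-contrast conditions.
rng = np.random.default_rng(1)
face = rng.random((32, 32))
brighter = face * 1.3 + 0.1         # global lighting change, same underlying face
print(same_person(face, brighter))  # True: the features discount the lighting change
```

Real systems learn far richer invariances than this, including to head pose and partial occlusion, and it is precisely the training data behind those learned features, and any biases within it, that attract criticism.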

These AI systems have also come under criticism. AI facial recognition systems, for instance, may use training data that was scraped from the web without the consent of the image subjects or the image owners, or the training databases may contain inherent biases. In 2020, the then Australian Human Rights Commissioner, Mr Edward Santow, criticized leading facial recognition systems for using training datasets with insufficiently diverse participants, thereby creating a greater risk of inaccurate matching for women, people with non-white skin, and people with disabilities.

Further applications of facial images involve AI analysis of facial expressions, perhaps combined with analysis of body posture and physical movements, in order to infer individuals' emotional states or tendencies. Such facial analysis systems may seek, for example, to infer the aggressiveness or emotional maturity, or even the confidence and sincerity, of the people whose facial images are recorded. Automated analyses of video interviews along these lines have been applied to recruitment and promotion processes. The AI technology used here is very immature, and it has been criticized for lacking any sound scientific basis.

Concerns around bias and misuse

Regulators and legislators are concerned about the increased use of biometric data and the role of artificial intelligence in such applications, and there is a move toward greater legislation and regulation to govern its safe and ethical use. Under existing law in Europe, the EU General Data Protection Regulation provides stringent protection for personal data and for special categories of data, including biometric data used to identify individuals. In the UK, the Information Commissioner has also warned about the risks of using biometric data, in particular emotion analysis technologies, with biometric guidance due to be published in Spring 2023.

The proposed new EU AI Act likewise recognises the risks posed by the potential misuse of biometric data and the breach of fundamental human rights: under the proposed legislation, certain uses of biometric data would be prohibited, other uses would be categorised as high risk, and stringent obligations would have to be complied with to ensure that such use is safe and ethical. In Australia, a team at the University of Technology Sydney (UTS) has proposed a model law for the regulation of facial recognition and analysis AI systems.

As these examples illustrate, the use of biometric data is increasingly subject to regulation across the globe, and global organizations will need to keep abreast of these emerging legal and regulatory requirements, just as they do for new developments in innovative technologies.

With thanks to Professor Peter McBurney and Jonny Marshall for contributing to this post.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.