In recent years, the use of real-time remote biometric identification cameras has gained significant relevance, particularly as a means of enhancing public safety. This technology, built on cameras and artificial intelligence (AI) systems, enables real-time monitoring and analysis of public and private spaces to identify potential risks and suspicious activities, giving authorities the ability to respond quickly and effectively to emergencies. From airports to large events, remote biometrics has been deployed as a tool to improve surveillance and facilitate the management of large crowds.
A recent example of this technology's application occurred during the Paris Olympic Games in August 2024, an event that marked a milestone for the innovative security measures it deployed. The organizers of the Paris Olympics faced significant security challenges, owing not only to the volume of fans and tourists but also to prior terrorist threats in the French capital. Given this situation, the French government implemented unprecedented measures, including the experimental use of AI systems to enhance security during the Olympics and similar events through 2025.
These AI-equipped security cameras allow, among other functions, the detection of individuals or vehicles entering restricted areas, the identification of abandoned luggage in train or subway stations, and the measurement of the flow or concentration of people in areas or at times deemed unusual.
Against this backdrop, experts and lawyers questioned whether this technology involved the processing of personal data and, if so, whether such processing could violate citizens' rights. The concern arises because these cameras inevitably capture biometric data, recording the bodies and faces of individuals passing through an area. Additionally, in order to detect "unusual situations", the system builds a profile of each person based on their movements, behaviour, gait and the like, potentially infringing on their privacy.
The organizers of the Olympics and Wintics, the French company that developed the AI, insisted that the cameras installed were not equipped with facial recognition systems and did not process personal data. According to the company, the software only identified shapes and silhouettes of individuals or objects and based on that, detected anomalies. Moreover, they argued that all decision-making processes underwent prior human supervision.
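The technical distinction Wintics draws can be made concrete. Purely as an illustration (this is a minimal sketch, not the actual Wintics software; all names, zones and thresholds are hypothetical), an anomaly detector of the kind described operates on anonymous labelled bounding boxes and never touches identity data:

```python
from dataclasses import dataclass

# Hypothetical restricted zone: an axis-aligned rectangle
# (x_min, y_min, x_max, y_max) in image coordinates.
RESTRICTED_ZONE = (100, 100, 300, 300)

@dataclass
class Detection:
    """An anonymous detection: a labelled bounding box, no identity data."""
    label: str                    # e.g. "person", "vehicle", "luggage"
    x: float                      # centre of the bounding box
    y: float
    stationary_secs: float = 0.0  # how long the box has not moved

def in_zone(d: Detection, zone) -> bool:
    x_min, y_min, x_max, y_max = zone
    return x_min <= d.x <= x_max and y_min <= d.y <= y_max

def flag_anomalies(detections, zone=RESTRICTED_ZONE, luggage_threshold=120.0):
    """Return human-readable alerts; a human operator reviews each one
    before any decision is taken (the 'prior human supervision' step)."""
    alerts = []
    for d in detections:
        if d.label in ("person", "vehicle") and in_zone(d, zone):
            alerts.append(f"{d.label} in restricted area")
        if d.label == "luggage" and d.stationary_secs > luggage_threshold:
            alerts.append("abandoned luggage")
    return alerts

alerts = flag_anomalies([
    Detection("person", 150, 200),
    Detection("luggage", 400, 50, stationary_secs=600),
])
print(alerts)  # ['person in restricted area', 'abandoned luggage']
```

Note that nothing in this sketch links a bounding box to a named individual: whether the real system likewise stops short of identity matching is precisely the factual question the legal analysis below turns on.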
Despite these assertions, concerns and debates around the use of these mass surveillance systems have intensified following the recent approval of the European Union's Artificial Intelligence Act (hereinafter, "AI Act"). This is because the EU's stance on biometric identification systems under the AI Act is clear: they constitute an infringement on privacy and fundamental human rights.
In fact, real-time remote biometric identification systems in publicly accessible spaces are considered, except in exceptional cases, prohibited. Article 5.1(h) of the AI Act states that the use of such real-time biometric identification systems in public spaces would only be permitted under the following circumstances:
- Selective search for specific victims of certain crimes (abduction, human trafficking, or sexual exploitation) and missing persons.
- Prevention of a specific, substantial and imminent threat to the life or safety of natural persons or a genuine and present or foreseeable threat of a terrorist attack.
- Locating or identifying a person suspected of having committed a criminal offence for investigation or prosecution purposes or executing a penalty.
Considering this, the surveillance systems used in Paris might be close to being classified as prohibited under the AI Act. Thus, the first step is to determine whether these technologies indeed involve biometric identification as defined by the AI Act. Article 3 of this legal text defines biometric identification as "the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database."
Therefore, it would be necessary to analyse whether the system truly compared the biometric data it collected with other biometric data stored in a database in order to identify individuals. If so, we would likely be dealing with real-time biometric identification.
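The Article 3 test therefore hinges on a one-to-many comparison of captured biometric data against stored templates. By way of illustration only (the database, embeddings and threshold below are entirely hypothetical), that matching step looks like this:

```python
import math

# Hypothetical enrolled database mapping identities to stored
# biometric templates (e.g. face embeddings).
ENROLLED = {
    "person_A": [0.12, 0.80, 0.45],
    "person_B": [0.90, 0.10, 0.33],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database=ENROLLED, threshold=0.3):
    """Compare a captured template against every stored template and
    return the closest match, if close enough. This one-to-many
    comparison is the step Article 3 calls 'biometric identification'."""
    best_id, best_dist = None, float("inf")
    for identity, template in database.items():
        dist = euclidean(probe, template)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist <= threshold else None
```

A system that never performs such a database comparison, one that only classifies shapes and silhouettes, would fall outside this definition; that is exactly the dividing line on which the Paris deployment is argued.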
In cases where real-time biometric identification exists, the question arises of whether its use during the Olympics falls within one of the exceptional circumstances outlined in Article 5.1(h) of the AI Act. The most straightforward justification in this case appears to be the prevention of threats, although it remains to be determined whether hosting the Olympics constitutes a "specific, substantial and imminent" threat to individuals' lives and safety.
It is also crucial to consider that the AI Act not only distinguishes based on the risk level of the systems but also focuses on who uses them and for what purposes. For instance, military or national security uses fall outside the scope of the AI Act, whereas other public uses may be subject to specific rules or exceptions. If a company or individual uses an AI system, the AI Act likely applies to them; however, if a public authority uses it, as in the case of the Olympics, the applicability of the AI Act becomes more uncertain.
As observed, determining whether the AI Act applies in each case, as well as classifying AI systems by their risk level, is far from straightforward. Doing so correctly is essential to establish not only the system's legality but also the obligations that the entities involved in the AI system's supply chain must fulfil.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.