The EU Regulation on Artificial Intelligence (the "AI Act") is the world's first comprehensive legislation regulating the development and use of AI. It takes a risk-based approach, classifying AI systems by risk level and focusing on their intended use. Although the AI Act generally takes effect on 2 August 2026, a first set of provisions (general provisions and prohibited practices) applies as of 2 February 2025, and a further set as of 2 August 2025.
Definition of an AI system: the core of the regulation
AI systems are defined as (i) machine-based systems, (ii) designed to operate with varying levels of autonomy, (iii) that may exhibit adaptiveness after deployment, and (iv) that, for explicit or implicit objectives, infer, from the input they receive, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Scope of application
The AI Act applies to different actors in the chain of AI systems:
- Providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established or located within the EU or in a third country;
- Deployers of AI systems that have their place of establishment or are located within the EU;
- Providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the EU;
- Importers and distributors of AI systems;
- Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
- Authorised representatives of providers that are not established in the EU;
- Affected persons that are located in the EU.
It does not apply to purely personal/private use of AI systems.
Gradual entry into force
The AI Act is generally set to apply as of 2 August 2026.
A first set of provisions (general provisions and prohibited practices – see below) is however applicable as of 2 February 2025.
A further set of provisions will become applicable as of 2 August 2025. The remaining provisions will apply either as of 2 August 2026 or 2 August 2027.
What changes on 2 February 2025?
i. Prohibited AI practices
In response to the unacceptable risks posed by certain AI systems, the AI Act lists a number of "prohibited practices" which must be removed from the market and/or discontinued as of 2 February 2025.
Unlike other provisions of the AI Act, this requirement applies universally to all operators, irrespective of their role or identity. Consequently, it extends to various activities, including the placing on the market, putting into service or use of AI systems that, for example:
- Infer emotions in the workplace or in educational institutions (except where this is intended for medical or safety reasons);
- Employ subliminal, manipulative or deceptive techniques;
- Exploit vulnerabilities of individuals based on age, disability or a specific social or economic situation;
- Facilitate social scoring by both public and private actors;
- Create or expand facial recognition databases through indiscriminate scraping of facial images from the internet or CCTV footage;
- Categorise individuals based on biometric data to infer sensitive information such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation;
- Assess or predict the likelihood of an individual committing a criminal offence solely on the basis of profiling or personality assessments.
ii. Employee awareness
Starting 2 February 2025, employers qualifying as providers and/or deployers of AI systems must ensure their staff - and anyone working with AI systems on their behalf - possess a sufficient level of AI literacy.
While the legislation does not provide explicit guidelines or further directives, AI literacy can be understood as the requirement for all staff to possess a sufficient level of education in the comprehension, application, oversight, and critical evaluation of AI applications. It is essential to consider the technical expertise, experience, education, and training of the individuals involved, as well as the specific context in which the AI systems will be deployed, and the impact on the individuals or groups affected by these systems.
Key strategies to enhance AI literacy within your organisation are:
- Identify the skills and knowledge required to support AI initiatives.
- Evaluate existing AI knowledge, skills, and identify any gaps.
- Develop tailored training programmes at various levels. This should include foundational AI literacy for all staff (covering AI policy, ethics, and security), as well as advanced training for technical roles. Ensure that training is ongoing and evolves with the latest advancements in AI; and
- Develop external partnerships with legal, ethical, and technical AI experts to stay informed on industry standards and emerging best practices.
Which sanctions apply for continuing to use prohibited AI systems?
- Fines of up to 7% of annual global (group) turnover, or
- Fixed amounts between EUR 7.5 million and EUR 35 million, depending on the violation and business type.
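For prohibited practices, the AI Act (Article 99(3)) sets the maximum fine for an undertaking at EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. The resulting ceiling can be sketched as follows (a minimal illustration only; the figures shown are the statutory maxima for prohibited practices, and the actual fine imposed will depend on the circumstances of the violation):

```python
def max_fine_ceiling(annual_global_turnover_eur: float,
                     fixed_cap_eur: float = 35_000_000,
                     turnover_rate: float = 0.07) -> float:
    """Ceiling for a prohibited-practice fine imposed on an undertaking:
    the higher of the fixed cap (EUR 35 million) and 7% of total
    worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_rate * annual_global_turnover_eur)

# Undertaking with EUR 1 billion turnover: 7% (EUR 70m) exceeds the fixed cap.
print(max_fine_ceiling(1_000_000_000))  # 70000000.0

# Undertaking with EUR 100 million turnover: the fixed cap of EUR 35m applies.
print(max_fine_ceiling(100_000_000))  # 35000000
```

Lower fixed caps and turnover percentages apply to other categories of violations, which is why the ranges above start at EUR 7.5 million.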
The provisions on fines will become applicable as of 2 August 2025.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.