On 4 February 2025, the European Commission (EC) issued guidelines on prohibited artificial intelligence (AI) practices established by the EU AI Act (the "Guidelines").
Background
The Guidelines, which are currently non-binding, provide clarity on prohibited AI practices that pose a risk to the values and fundamental rights of the European Union, such as public security, the right to privacy, and the right to non-discrimination. The purpose of this two-part briefing series is to assist privacy professionals and other operators of AI systems in understanding the implications of the Guidelines. Part one looks at the first four use cases, under Article 5(1)(a) to 5(1)(d) inclusive. Part two will look at the remaining prohibited use cases under Article 5(1)(e) to Article 5(1)(h), plus additional key takeaways.
Interim period – Enforcement
The provisions relating to prohibited AI systems became applicable on 2 February 2025, with penalties applying from 2 August 2025, leaving a six-month interim period in relation to enforcement. During this interim period, notwithstanding that domestic market surveillance authorities have not yet been designated, the provisions on prohibited AI systems remain mandatory.
Prohibited AI practices
Article 5 AI Act contains the list of AI use cases which are expressly prohibited. The summaries below set out each use case, the background and rationale behind the prohibition, and the permissible exceptions and examples of use cases which are out of scope. They do not constitute an exhaustive list, and specific advice should be sought if a use case is likely to fall within Article 5. The Guidelines break out each component of the Article 5 subsections into a set of cumulative conditions, all of which must be met in order for the prohibition to apply.
Manipulation and deception – Article 5(1)(a)
Article 5(1)(a) contains a ban on AI systems that use subliminal techniques or manipulative strategies to materially distort individuals' behaviour, impairing their ability to make informed decisions, in a manner that causes or is reasonably likely to cause significant harm.
Cumulative conditions (all of which must be met in order for the prohibition to apply):
- The practice must constitute the 'placing on the market', the 'putting into service', or the 'use' of an AI system.
- The AI system must deploy subliminal (beyond a person's consciousness), purposefully manipulative or deceptive techniques.
- The techniques must have the objective or the effect of materially distorting the behaviour of a person or a group of persons.
- The distortion must appreciably impair their ability to make an informed decision, resulting in a decision that the person or the group of persons would not have otherwise made.
- The distorted behaviour must cause, or be reasonably likely to cause, significant harm to that person, another person, or a group of persons.

Background and insights:
- Examples of subliminal techniques include visual subliminal messages (flashed too quickly for the conscious mind to register), auditory subliminal messages, subvisual and subaudible cueing, embedded images, misdirection and temporal manipulation.
- The Guidelines provide that "significant harm" includes physical, psychological, financial and economic harms. The concept of "significant harm" is "nuanced and context dependent", and the Guidelines note a number of key considerations to take into account when assessing it within the meaning of Article 5(1)(a) AI Act, namely: the severity of the harm; context and cumulative effects; scale and intensity; the vulnerability of affected persons; and duration and reversibility.

Exceptions/Out of scope:
- Lawful persuasion, for example personalised recommendations based on transparent algorithms and user preferences. In contrast, subliminal cues (imperceptible imagery that influences consumers' purchasing decisions without their conscious awareness) would constitute manipulation.
Harmful exploitation of vulnerabilities – Article 5(1)(b)
Article 5(1)(b) contains a ban on using AI to exploit vulnerabilities related to age, disability, or socio-economic situation, with the objective or effect of materially distorting a person's behaviour, in a manner that causes, or is reasonably likely to cause, significant harm.
Crucially, both Article 5(1)(a) and (b) apply not only where the 'objective' of the use case is to distort behaviour, but also to any system that has the 'effect of' distorting behaviour and causing, or being reasonably likely to cause, significant harm – intention is therefore not required.
Cumulative conditions (all of which must be met in order for the prohibition to apply):
- The practice must constitute the 'placing on the market', the 'putting into service', or the 'use' of an AI system.
- The AI system must exploit vulnerabilities due to age, disability, or socio-economic situation.
- The exploitation enabled by the AI system must have the objective, or the effect, of materially distorting the behaviour of a person or a group of persons.
- The distorted behaviour must cause, or be reasonably likely to cause, significant harm to that person, another person, or a group of persons.

Background and insights:
- "Vulnerabilities" encompasses a broad spectrum of categories, including "cognitive, emotional, physical, and other forms of susceptibility", which impairs the ability of an individual or a group to make informed decisions or otherwise influences their behaviour.
- "Exploitation" should be understood as the use of such vulnerabilities in a manner that is harmful for the exploited group, clearly distinguished from lawful practices. Examples include AI-powered toys designed to encourage risky behaviour in children, or addictive dopamine loops or reinforcement schedules. It excludes AI-enabled toys, games and learning applications that generally bring benefits; these are not affected where they do not meet all the criteria of the prohibition.

Exceptions/Out of scope:
AI systems that are not likely to cause significant harm include:
- AI companionship systems designed to make users more engaged, without manipulative or deceptive practices present.
- AI systems which recommend music, avoiding exposure to depressive songs.
- AI systems which use subliminal techniques to encourage users to make healthy choices, for example smoking cessation.
- AI systems which simulate phishing attempts to educate users on cybersecurity threats.
- Advertising techniques that use AI to personalise content based on user preferences, which are not inherently manipulative if they comply with the prohibitions in Article 5 AI Act and the relevant obligations under the GDPR, consumer protection law and Regulation (EU) 2022/2065 (Digital Services Act).
- AI systems for providing banking services such as mortgages and loans, which use the age or socio-economic status of the client as an input where they are designed to protect and support vulnerable groups due to their age and status.
Social scoring – Article 5(1)(c)
Article 5(1)(c) contains a ban on using AI to categorise individuals based on their social, personal, or professional behaviour, or their personal or personality characteristics, when this results in unjustified or detrimental treatment.
Cumulative conditions (all of which must be met in order for the prohibition to apply):
- The practice must constitute the 'placing on the market', the 'putting into service', or the 'use' of an AI system.
- The AI system must be intended or used for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics.
- The social score must lead, or be capable of leading, to the detrimental or unfavourable treatment of persons or groups in one or more of the following scenarios: (a) in social contexts unrelated to those in which the data was originally generated or collected; and/or (b) treatment that is unjustified or disproportionate to their social behaviour or its gravity.

Background and insights:
- Profiling: following the decision in the SCHUFA case (Case C-634/21), profiling of natural persons, where conducted through AI systems, may be covered by this prohibition, as it is captured within the concept of "evaluation", provided the other conditions for the prohibition are fulfilled.
- The phrasing "over a certain period of time" indicates that the assessment should not be limited to once-off gradings or ratings from a very specific individual context; this should not be used to circumvent the prohibition.
- Social behaviour is broad and includes "actions, behaviour, habits, interactions within society, etc., and usually covers behaviour related data points from multiple sources". It can cover private contexts and business contexts, such as the payment of debts.
- "Personal characteristics" may or may not involve social behaviour, for example performance at work, economic situation, financial liquidity, health, personal preferences, interests, reliability, behaviour, location or movement, level of debt, or type of car. Personality characteristics should be interpreted in the same way, but include individuals' profiles and may imply a judgment by other persons or AI systems.
- "Known, inferred or predicted characteristics" have implications for the fairness of the scoring practice in question.
- There must be a causal link between the social score and the treatment of the evaluated person or group of persons. It is not enough that the credit score results in detrimental or unfavourable treatment (which might meet the GDPR criterion for automated processing): under Article 5(1)(c) the treatment must either be (i) in unrelated social contexts or (ii) disproportionate to the gravity of the social behaviour.
- The Guidelines recognise at paragraph 164 that "many AI-enabled scoring and evaluation practices may not fulfil them [either of these criteria] and therefore be outside the scope of the prohibition. In particular, this may not be the case where the AI-enabled scoring practices are for a specific legitimate evaluation purpose and comply with applicable Union and national laws that specify the data considered as relevant for the purposes of evaluation and ensure that the detrimental or unfavourable treatment is justified and proportionate to the social behaviour".

Exceptions/Out of scope:
- Financial credit scoring systems used by creditors or agencies to assess a customer's financial creditworthiness or outstanding debts, providing a credit score or determining their creditworthiness assessment, which are based on the customer's income and expenses and other financial and economic circumstances, provided they are relevant for credit scoring purposes and broadly comply with consumer protection laws.
- Telematics data collected to show unsafe driving, used by insurers to offer telematics-based tariffs, provided any pricing increases are proportionate to the risky behaviour of the driver.
- Loyalty systems in online shopping: AI-enabled scoring by an online shopping platform which offers privileges to strong buyers with a low rate of product returns.
Prediction of criminal offences – Article 5(1)(d)
Article 5(1)(d) contains a ban on AI systems that assess or predict the likelihood of a natural person committing a criminal offence, based solely on profiling or on their personality traits and characteristics.
Cumulative conditions (all of which must be met in order for the prohibition to apply):
- The practice must constitute the 'placing on the market', the 'putting into service for this specific purpose', or the 'use' of an AI system.
- The AI system must make risk assessments that assess or predict the risk of a natural person committing a criminal offence.
- The risk assessment or the prediction must be based solely on either, or both, of the following: (a) the profiling of a natural person; (b) assessing a natural person's personality traits and characteristics.

Background and insights:
- Notably, the Guidelines advise that the prohibition does not ban risk assessment and crime prediction practices outright. It applies specifically to AI systems used for making risk assessments that assess or predict the risk of an individual committing a criminal offence, where these are based "solely" on profiling and/or on assessing personality traits and characteristics. This includes group profiling, which involves creating descriptive profiles for categories of criminal offenders and using AI to apply a group profile to make an assessment or prediction.
- The Guidelines also provide that the term "solely" allows for the inclusion of other elements in the risk assessment, i.e. an assessment not exclusively based on profiling or personality traits. To avoid circumvention, these additional elements must be real, substantial and meaningful, and pre-established objective and verifiable facts may justify that conclusion.

Exceptions/Out of scope:
- The text of the AI Act itself provides for an exclusion for AI systems supporting human assessments based on objective and verifiable facts directly linked to a criminal activity. Where a system falls within this exclusion and is not banned, the Guidelines confirm it will be classified as a high-risk AI system pursuant to Annex III, point 6(d), AI Act.
- Where private parties use AI risk assessments in their own businesses to protect their own private interests, and those risk assessments relate to the risk of criminal offences being committed only as a purely accidental and secondary circumstance, the Guidelines note that this is not covered by the prohibition.
- Location-based or geospatial crime predictions, crime predictions in relation to legal entities, and predictions in relation to administrative offences are out of scope.
A reminder of the AI Act's implementation timeline is set out below:
By August 2025
- Obligations go into effect for providers of general-purpose AI models.
- Designation of national supervisory authorities ("at least one" market surveillance authority and "at least one" notifying authority).
- Member States are required to have implemented rules on penalties and other enforcement measures.
By August 2026
- Commencement of compliance obligations for high-risk AI systems in Article 6(2) (AI systems referred to in Annex III).
By August 2027
- Compliance obligations for high-risk AI systems covered by Article 6(1) go into effect (AI products, or AI as safety components of a product).
- Deadline for compliance for general-purpose AI models placed on the market before 2 August 2025.
Watch our video series on the EU AI Act.
The authors would like to thank Jennifer Floyd for her contribution to the article.
This article contains a general summary of developments and is not a complete or definitive statement of the law. Specific legal advice should be obtained where appropriate.