ARTICLE
21 May 2025

Banned AI: What The EU AI Act Means For Gaming And Gambling Systems

Let's not sugar-coat it: Article 5 of the EU AI Act is not a compliance suggestion. It's a red line. A clear and binding list of artificial intelligence practices that are so inherently harmful that the law doesn't even try to regulate them. It simply bans them. For the gaming and gambling sectors - industries built on personalisation, behavioural incentives, and player segmentation - this isn't just legal noise. It's a direct shot across the bow. Some practices that were once considered clever engagement strategies are now squarely in the "prohibited" category.

And if you operate in this space, this means one thing: it's time to audit not just what your AI does, but what it enables. The die has been cast.

Let's focus on three specific limbs of Article 5 that are particularly relevant to this space. The Act bans AI systems that:

  1. Manipulate or deceive individuals in ways that impair autonomy or cause harm;
  2. Exploit vulnerabilities related to age, disability, or socio-economic status;
  3. Apply social scoring to classify people and disadvantage them across contexts.

These prohibitions already apply at the EU level, and once the remaining normative parts of the AI Act kick in, so will the penalties, with the highest tier reaching up to €35 million or 7% of global annual turnover.

(A) Manipulative or Deceptive AI – Article 5(1)(a)

This provision bans AI that distorts behaviour using subliminal, manipulative, or deceptive techniques that impair autonomy and cause, or are likely to cause, harm. It doesn't matter if the manipulation is clever or commercially successful: if it interferes with rational decision-making, the line has been crossed. Examples that clearly breach this provision:

  • Slot machines that trigger AI-driven "near miss" animations, not randomly, but precisely when a player's emotional fatigue or loss aversion peaks. When these animations are calibrated to simulate "almost winning" based on player psychology and past losses, they don't just create engagement; they manufacture compulsion. And if harm follows, it's a textbook violation (see the sketch after this list for what such logic can look like).
  • Loot boxes that quietly alter drop probabilities in response to behavioural signals, for example, reducing the chance of rare items just after a large spend or increasing visible teases of valuable rewards when players show signs of frustration or near-exit behaviour. These are no longer game mechanics; they're adaptive persuasion engines.
  • Reward loops built on variable-ratio reinforcement, where unpredictable, intermittent wins are used to keep players engaged beyond their intention. If the AI system detects a user's susceptibility to intermittent dopamine feedback and escalates exposure during vulnerable moments, it's manipulative by design.
  • Conversational agents or chatbots that change tone, frequency, or call-to-action timing based on the player's emotional cues. For instance, bots that sense emotional lows (through text input or gameplay data) and then intensify nudging toward deposits, re-bets, or bonus claims, preying on moments of behavioural vulnerability.
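
To make this concrete for an audit, here is a deliberately simplified, purely hypothetical Python sketch of the kind of logic described in the first and third bullets above: a reward engine that escalates "near miss" presentations and intermittent teaser wins when a player-state model signals fatigue or loss-chasing. Every name and threshold is invented for illustration; the point is that if a codebase contains branches like these, Article 5(1)(a) is squarely in play.

    from dataclasses import dataclass
    import random

    @dataclass
    class PlayerState:
        # Hypothetical behavioural signals inferred by an engagement model
        fatigue_score: float   # 0.0 (fresh) to 1.0 (emotionally exhausted)
        recent_losses: int     # consecutive losing spins
        loss_chasing: bool     # model flag: player appears to be chasing losses

    def next_spin_presentation(state: PlayerState) -> str:
        # The problematic pattern: the presentation (near-miss animation, teaser
        # win) is keyed to the player's inferred vulnerability rather than being
        # random - exactly the behaviour-distorting technique Article 5(1)(a) targets.
        if state.loss_chasing and state.fatigue_score > 0.7:
            return "near_miss_animation"   # simulate "almost winning" at the worst moment
        if state.recent_losses >= 5:
            # unpredictable, intermittent reinforcement to re-hook the player
            return "teaser_win" if random.random() < 0.3 else "standard_loss"
        return "standard_outcome"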

(B) Exploiting Vulnerabilities – Article 5(1)(b)

This clause bans AI that exploits people's vulnerabilities, particularly those tied to age, disability, or socio-economic conditions. The law doesn't require intent: if the system targets those vulnerabilities and harm follows, it's likely non-compliant. For example:

  • Children's games where AI dynamically adjusts difficulty, introducing punishing spikes - solvable only through in-app purchases - at the moments the system detects the child is most engaged. If the AI system uses real-time play data to manufacture frustration at critical points and then offers microtransactions as the only relief, it's not gamification - it's monetised coercion.
  • Betting platforms using payday prediction algorithms to identify when users are likely to receive income and then time personalised promotions, offers, or nudges during those windows. If a system tracks salary deposit patterns or benefit disbursement cycles to push deposits, that's not targeted marketing - it's exploitative timing (a hypothetical sketch of this logic follows the list).
  • User interfaces designed to make deposits simple but withdrawals convoluted, especially for older users or those with cognitive challenges. Think two clicks to deposit, but five menus to withdraw, with smaller fonts, longer wait times, and poor accessibility. If that experience is driven or personalised by AI, it becomes an age-related barrier by design.
  • AI models that infer financial distress from user behaviour - such as using outdated smartphones, playing late at night, or having irregular deposit patterns - and then push high-risk bet suggestions or match bonuses in response. Even if socio-economic status is inferred indirectly, the resulting targeting is direct and legally sensitive.
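
Again purely for illustration, the following sketch shows what "exploitative timing" can look like in code: a promotion scheduler that infers a likely payday from deposit history and concentrates offers in that window. The function names, fields, and thresholds are invented; the pattern - deriving a socio-economic rhythm from behaviour and then targeting it - is what Article 5(1)(b) catches.

    from datetime import date
    from collections import Counter

    def likely_payday(deposit_dates: list[date]) -> int | None:
        # Infer the day of the month on which the user most often deposits.
        # Inferring income timing from behaviour is the sensitive step: the
        # socio-economic signal is indirect, but the targeting that follows is direct.
        if not deposit_dates:
            return None
        day_counts = Counter(d.day for d in deposit_dates)
        day, hits = day_counts.most_common(1)[0]
        return day if hits >= 3 else None

    def should_send_bonus_offer(today: date, deposit_dates: list[date]) -> bool:
        # Concentrate personalised offers in the 48 hours around the inferred payday.
        payday = likely_payday(deposit_dates)
        return payday is not None and abs(today.day - payday) <= 1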

(C) Social Scoring – Article 5(1)(c)

Article 5(1)(c) prohibits AI systems that assign behavioural or personal "scores" to individuals and use them to restrict access or apply disadvantages in unrelated contexts. Let me clarify: this isn't about rating performance in a game; it's about systems that surreptitiously monitor users across separate environments and quietly limit rights or access based on hidden classifications. These are some common, but now likely unlawful, examples:

  • Using player behaviour in one game to penalise them in another, such as tagging a player as "high risk" based on long sessions or frequent losses in Game A, and then reducing bonus eligibility or fast-track withdrawal access in Game B. If that decision isn't explained, consented to, or linked to a lawful operational need, it's illegal social scoring (see the sketch after this list).
  • Assigning internal risk tiers to players, which are used to delay withdrawals, restrict account functionality, or silently deprioritise support. If those scores are based on inferred behavioural value (e.g., how "profitable" a player is, or how often they chase losses), it's not just discriminatory; it strips users of fair and transparent treatment.
  • AI-driven suppression of features based on invisible scoring systems: for example, players tagged as "low value" being automatically excluded from promotional content, denied customer support escalation, or algorithmically dissuaded from withdrawing. If they can't see the score or change the outcome, the system becomes a black box, and under the AI Act, that's a red flag.
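
Finally, a hypothetical sketch of the cross-context scoring described in the first bullet above: an opaque behavioural tag earned in one game silently downgrading a player's treatment in another. Every identifier is invented; the legally significant features are that the score is inferred, invisible to the player, and applied in an unrelated context.

    from dataclasses import dataclass, field

    @dataclass
    class PlayerProfile:
        player_id: str
        # Hypothetical behavioural tags inferred from activity in Game A
        tags: set[str] = field(default_factory=set)

    def apply_cross_game_restrictions(profile: PlayerProfile) -> dict:
        # The player never sees the "high_risk" or "low_value" label, cannot
        # contest it, and the consequences land in a different context
        # (Game B withdrawals, support priority) - the core of Article 5(1)(c).
        restrictions = {"withdrawal_delay_hours": 0,
                        "bonus_eligible": True,
                        "support_priority": "normal"}
        if "high_risk_game_a" in profile.tags:
            restrictions["withdrawal_delay_hours"] = 72   # slow cash-out in Game B
            restrictions["bonus_eligible"] = False
        if "low_value" in profile.tags:
            restrictions["support_priority"] = "deprioritised"
        return restrictions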

Gaming and gambling operators should ask themselves three hard questions:

  • Is our AI system influencing behaviour in ways the user can't reasonably detect?
  • Are we tailoring experiences based on vulnerability signals?
  • Are we scoring users and changing their treatment in ways they can't see or challenge?

If the answer to any of those is "yes," then this isn't just a compliance matter. It's a question of legality. Under the AI Act, it's not just usage that matters; design alone can trigger liability. But this is also about honesty and integrity: advancing innovation and your business while preserving human dignity and autonomy, and showing that this is a mature industry worth trusting.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
