ARTICLE
3 June 2025

US-Based AI Developer Fined €5 Million For GDPR Violations: Key Takeaways

Roth Jackson

Contributor

Roth Jackson and Marashlian & Donahue's strategic alliance delivers premier regulatory, litigation, and transactional counsel in telecommunications, privacy, and AI, guiding global technology innovators with forward-thinking strategies that anticipate risk, support growth, and navigate complex government investigations and litigation challenges.

Italy's data protection authority (the Garante) recently imposed a €5 million fine on Luka, Inc., the US-based developer of the emotional AI companion chatbot Replika, for violations of the General Data Protection Regulation (GDPR). The Garante also initiated a new investigation into the methods used to train the underlying AI model. This enforcement action highlights regulators' increasing scrutiny of AI models that interact directly with individuals and process sensitive personal data, particularly those designed to simulate emotionally responsive relationships.

Background: Emotional AI Companions

Emotional AI companions, such as Replika, are artificial intelligence systems engineered to simulate emotional relationships with users. These platforms use natural language processing, sentiment analysis, and behavioral prediction to adapt responses and foster emotional bonds. While they may offer therapeutic or social support benefits, especially for users hesitant to seek traditional mental health services, they also pose significant ethical, psychological, and regulatory risks—particularly when processing sensitive or behavioral data.

Key Deficiencies Identified by the Garante

The Garante's decision identified several critical deficiencies in Replika's operations:

  • Lack of a Valid Legal Basis for Data Processing (Article 6 GDPR):
    Replika processed personal data without a proper legal basis. The company failed to obtain valid consent or establish another lawful ground for processing, and users were not given the information needed to provide valid, informed consent.
  • Deficiencies in Transparency and Information (Articles 12–14 GDPR):
    Privacy notices and information practices were insufficient. Users were not clearly informed about the types of data collected, processing purposes, legal basis, or data recipients, undermining transparency and informed decision-making.
  • Potential Exposure of Minors to Inappropriate Content:
    Despite claims that Replika was intended for users aged 18 and over, there were no effective age-verification mechanisms. The chatbot reportedly engaged in explicitly suggestive or emotionally manipulative conversations, posing risks to minors' mental well-being.
  • Inadequate Safeguards for Sensitive or Behavioral Data:
    The app encouraged users to disclose sensitive thoughts and emotions, but lacked sufficient safeguards to protect this data or ensure processing was necessary and proportionate.

Broader Regulatory Trends

The Garante's action reflects a broader trend among data protection authorities to scrutinize AI models, especially those processing sensitive or behavioral data. US lawmakers are also responding: New York, for example, recently introduced a bill that would regulate AI companion models, mandating safety features, periodic reminders that users are interacting with an AI, and protocols for detecting and responding to self-harm or suicidal ideation.

Best Practices for AI Developers and Deployers

  • Assess Legal and Regulatory Risks:
    Assess and address legal risks in all jurisdictions where your service is deployed, especially for AI tools that collect sensitive user data.
  • Establish a Clear Legal Basis for Processing Personal Data:
    If relying on consent, ensure it is specific, informed, freely given, and withdrawable at any time; a minimal consent-record sketch appears after this list.
  • Review and Enhance Privacy Notices and Disclosures:
    Update user interfaces, onboarding flows, and privacy notices to clearly communicate data collection, use, access controls, system limitations, and user rights.
  • Limit and Secure Sensitive Data:
    Avoid processing unnecessary sensitive data. Use technical safeguards such as data minimization, access controls, encryption, and automated deletion where feasible; a retention sketch also follows below.
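
By way of illustration, the consent point above can be made concrete in code. The sketch below (in Python) shows one way a purpose-specific, withdrawable consent record might be structured; every name in it (ConsentRecord, Purpose, may_process) is hypothetical and not drawn from any real library or from Luka's systems. The underlying requirement is GDPR Article 7: consent must be demonstrable, granular to each purpose, and as easy to withdraw as it was to give.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from enum import Enum
    from typing import Optional

    class Purpose(Enum):
        # Each processing purpose gets its own opt-in; bundled consent
        # is not "specific" in the GDPR sense.
        CHAT_PERSONALIZATION = "chat_personalization"
        MODEL_TRAINING = "model_training"
        MARKETING = "marketing"

    @dataclass
    class ConsentRecord:
        user_id: str
        purpose: Purpose
        granted_at: datetime
        notice_version: str              # which privacy notice the user saw
        withdrawn_at: Optional[datetime] = None

        def withdraw(self) -> None:
            # Withdrawal must be possible at any time (Article 7(3)).
            self.withdrawn_at = datetime.now(timezone.utc)

        @property
        def active(self) -> bool:
            return self.withdrawn_at is None

    def may_process(records: list[ConsentRecord], user_id: str,
                    purpose: Purpose) -> bool:
        # Process only where an active, purpose-specific consent exists.
        return any(r.active for r in records
                   if r.user_id == user_id and r.purpose == purpose)

Recording the notice version alongside each grant is one way to evidence that consent was informed: it ties the user's choice to the exact disclosure they were shown.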
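
The storage-limitation principle (GDPR Article 5(1)(e)) can likewise be operationalized as an automated retention job. The sketch below is again only a sketch, with a hypothetical message store and an illustrative 30-day window; the appropriate retention period depends on the stated purpose of processing.

    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=30)   # illustrative window, not a legal rule

    def purge_expired(messages: list[dict]) -> list[dict]:
        # Keep only messages still inside the retention window; each
        # message is assumed to carry a timezone-aware 'created_at'.
        cutoff = datetime.now(timezone.utc) - RETENTION
        return [m for m in messages if m["created_at"] >= cutoff]

Run on a schedule, a job like this gives effect to "automated deletion where feasible" without relying on users to request erasure.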

Conclusion

The Garante's enforcement action against Replika serves as a clear warning to AI developers and deployers worldwide. Companies must proactively address regulatory expectations, enhance transparency, and implement robust safeguards, especially for services targeting vulnerable populations or processing sensitive data. Failure to do so may result in significant financial penalties and reputational harm.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
