November 2025 – As Europe moves ahead with the AI Act, the European Commission has made clear that there will be “no stop the clock, no grace period, and no pause”, despite calls from major tech players such as Google, Meta, Mistral, and ASML to delay implementation. The timeline remains firm: the first provisions, including the prohibitions, apply from February 2025, general-purpose AI model rules from August 2025, and most high-risk AI obligations from August 2026.
The message is clear: the Commission intends to keep its
ambitious AI timeline on track, while it looks to streamline other
digital obligations for companies. Against this backdrop, the GDPR
may also be entering a new chapter. More than seven years after it
took effect, a leaked draft of the Digital Omnibus package shows the
Commission's intent to update the framework for the era of
AI-driven data processing.
1. A shifting definition of privacy
The leaked draft suggests amendments that could substantially reshape the data-protection landscape:
- a narrower definition of personal data, potentially excluding pseudonymised identifiers or other data that does not directly identify a person;
- reduced safeguards for sensitive data categories (e.g., health or political beliefs) when such data does not directly reveal those characteristics;
- broader permissions for remote access to personal data on devices such as smartphones or computers, possibly without explicit consent; and
- “legitimate interest” explicitly recognised as a lawful basis for training and operating AI systems.
For many organisations, the proposed changes could simplify compliance obligations and clarify how AI-related data processing fits within the GDPR framework. At the same time, they raise questions about consistency and enforcement across Member States, signalling that the EU debate is far from over.
2. Why it matters
If adopted, the entire compliance landscape could shift:
- AI-driven decision-making (recruitment, performance evaluation, customer profiling) may increasingly rely on “legitimate interest”;
- datasets currently considered “personal” may fall outside the GDPR scope, altering how companies document and justify data processing;
- narrower definitions might make compliance easier on paper yet raise reputational risks when algorithmic bias or discrimination occurs.
3. Staying ahead of the curve
To prepare for this evolving landscape, companies should:
- map all AI-related data uses (training, testing, deployment) and assess whether these involve personal or pseudonymised data;
- (re-)evaluate lawful bases for AI-driven processing, documenting “legitimate interest” assessments where applicable;
- update internal policies and transparency notices to include clear explanations of AI use and human oversight;
- train HR, compliance, and IT teams on upcoming changes, particularly how GDPR, the AI Act and national laws interact;
- monitor EU and national developments to anticipate new reporting or accountability requirements.
Even as compliance frameworks evolve, accountability remains non-negotiable. The next competitive edge will come from integrating AI innovation within a culture of transparency, fairness, and accountability.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.