The EU Commission has published its proposal for the "Digital Omnibus", aimed at simplifying and streamlining the EU rules governing artificial intelligence, data protection, cybersecurity, and data use more broadly. The proposal seeks to amend several cornerstone EU regulations, including Regulation (EU) 2016/679 (GDPR), Regulation (EU) 2024/1689 (AI Act), Regulation (EU) 2023/2854 (Data Act), Directive 2002/58/EC (e-Privacy Directive) and Directive (EU) 2022/2555 (NIS2). The proposal also foresees the repeal of the fairly recent Regulation (EU) 2022/868 (Data Governance Act).
Below is a high-level snapshot of the proposal, ahead of a more detailed advisory we will publish.
The proposal will now move through what is expected to be a challenging legislative procedure, involving policy and political negotiations with the European Parliament and the Council.
Below we set out a quick overview of the most relevant elements for companies, including medical device manufacturers and other Life Sciences companies – e.g., changes to the AI Act, updates to the GDPR, reform of the EU cookie and tracking rules, data-sharing rules, and the new single-entry point for cybersecurity and data protection incident reporting.
Digital Omnibus package: what is in the box?
The European Commission has released two separate legislative proposals:
- The "Digital Omnibus on AI", a targeted set of amendments to the AI Act, intended to address perceived practical implementation challenges, ensure consistency with existing EU product legislation, and adjust certain transition/implementation timelines.
- The broader "Digital Omnibus", a horizontal proposal amending the GDPR, Data Act, e-Privacy Directive, NIS2 Directive, and other instruments, with the objective of simplifying overlapping obligations and improving coherence across the EU's data protection, cybersecurity and other digital legislation.
AI Act
- Clearer interplay between AI Act and other EU legislation
A key element of the proposal is the clarification that AI Act requirements should be applied within the existing conformity assessment frameworks set out in EU product legislation such as Regulation (EU) 2017/745 (MDR), and Regulation (EU) 2017/746 (IVDR). In practice, this means that manufacturers of AI-enabled medical devices will not be required to undergo two separate certification processes. Instead, compliance with the MDR/IVDR and the AI Act will be assessed together through a single, integrated conformity assessment procedure.
- Single application and assessment for notified bodies
The proposal provides that Member States will allow conformity assessment bodies to submit a single application and undergo a single assessment when seeking designation under both the AI Act and the EU product legislation listed in Annex I (such as the MDR and IVDR). In practice, this means that notified bodies will no longer need to complete two separate designation processes; instead, they can be evaluated once for both. This should accelerate the availability of notified bodies qualified to certify high-risk AI systems.
- Broader real-world testing opportunities
The proposal expands opportunities to test high-risk AI systems in real-world conditions. Under the current framework, real-world testing is mainly available for the Annex III use cases. The amendment would extend this possibility to AI systems covered by Annex I product legislation, including medical devices and in vitro diagnostics. This expansion could be significant for companies developing safety-critical or clinically-relevant AI models as it could provide a clearer legal footing for performance validation, data collection, and iterative improvement in real operational environments, while still operating within the regulatory framework.
- Timelines and grace period
The proposal introduces a new mechanism to adjust the application of the AI Act's high-risk obligations. These requirements will no longer automatically apply on 2 August 2026. Instead, their start date will be tied to the moment when the Commission confirms that the necessary support tools (such as harmonised standards, common specifications, or Commission guidelines) are available. Once that confirmation is issued, the obligations will apply six months later for Annex III high-risk systems, and twelve months later for Annex I systems integrated into regulated products, such as medical devices. This flexibility is limited by long-stop dates: Annex III systems must comply no later than 2 December 2027, and Annex I systems no later than 2 August 2028, even if the supporting measures are delayed. This approach acknowledges the practical reality that companies cannot operationalise Chapter III requirements until the underlying standards are in place.
In addition, to ease the transition into the AI Act, the proposal clarifies that if at least one unit of a high-risk AI system has been lawfully placed on the EU market before the relevant cut-off date, all other units of the same type and model may continue to be placed on the market without new certification, provided the design remains unchanged. Any significant design modification, however, will trigger full compliance with the AI Act, including conformity assessment.
GDPR
- Clarifying when pseudonymised data is not personal data
The proposal introduces an important clarification to the GDPR: information is not personal data for a given organisation if that organisation cannot identify the individual to whom the information relates, taking into account the means "reasonably likely to be used by that entity". This aligns closely with the recent judgment of the Court of Justice of the European Union (CJEU) (for more information please check our blog post here), which confirmed that pseudonymised data should not be regarded as constituting personal data in all cases and for every person. According to the CJEU, pseudonymisation may, depending on the circumstances and factual context, effectively prevent a party receiving the pseudonymised data from identifying a specific individual; in that case, such data would no longer be personal data for that party.
For life sciences companies routinely working with key-coded clinical data, adverse event reports, and research datasets, this combined legislative and judicial shift is significant. It moves EU law away from the rigid assumption that all pseudonymised data is personal data in all circumstances, and toward a more contextual, risk-based interpretation. In practice, this could reduce regulatory burdens when companies handle pseudonymised data.
- Mechanism to give better legal clarity on anonymisation and pseudonymisation techniques
To improve legal certainty, the proposal empowers the Commission, together with the European Data Protection Board, to issue guidance on when pseudonymised data should be treated as non-personal data. This includes specifying technical criteria, state-of-the-art methods, and risk-based indicators for assessing re-identification. For life sciences organisations working with multi-centre clinical datasets, safety reports, or real-world evidence, this would offer much-needed clarity on how to structure key-coding and pseudonymisation processes in a way that reduces the GDPR compliance burden.
- A more workable definition of scientific research
The proposal formally defines scientific research as activities that contribute to the expansion of general knowledge, including technological development, demonstration work, and novel applications of existing insights. Crucially, an activity does not cease to qualify as scientific research simply because it is conducted with commercial objectives, a point of particular relevance for life sciences companies engaged in industry-funded clinical studies and product-development projects.
- The processing of personal data for the development and operation of AI
The proposal introduces an explicit exemption allowing the residual processing of special categories of personal data where this occurs incidentally during the development or operation of AI systems and cannot reasonably be prevented. This reflects a pragmatic recognition that AI training datasets may inadvertently contain sensitive personal data, even when such data is not deliberately collected. According to the proposal, to address this risk, organisations must implement robust technical and organisational measures to detect and remove such data.
- The exercise of the individual's right of access
The proposal seeks to address growing concerns about the misuse of data-subject access requests, particularly where requests are made for non-data-protection purposes. Data controllers would be able to refuse such requests or charge a reasonable fee where they can demonstrate that a request is manifestly excessive or the mechanism for the exercise of the GDPR rights is being misused.
- Harmonised EU-level lists for DPIA requirements
The proposal aims to replace today's patchwork of national DPIA lists with an EU-wide set of processing activities that either do or do not require a data protection impact assessment. For organisations operating in multiple Member States, this would remove a persistent source of uncertainty and compliance friction, as the threshold for "high-risk" processing currently varies significantly across jurisdictions. A single, harmonised list would make it easier to plan and document DPIA obligations, particularly for companies running cross-border clinical trials or digital health services, and should reduce the risk of divergent expectations from different data protection authorities.
ePrivacy Directive
The proposal targets what the European Commission describes as "longstanding compliance challenges" of the ePrivacy Directive's "cookie rules," which have, according to the Commission, generated both operational burden for businesses and widespread "consent fatigue" among users. The proposal updates the rules on storing or accessing information on users' devices and aims to reduce the situations in which consent is required, leading to fewer cookie banners overall. At the same time, when consent is needed, users must be able to accept or refuse it with a single, clear choice, and controllers must avoid dark-pattern designs.
Data Act
The proposal also amends the Data Act by giving data holders an additional ground to refuse access requests involving trade secrets. Beyond the current test, which allows refusal where disclosure would likely cause serious economic harm, data holders would be able to decline access where there is a high risk that the trade secrets could be unlawfully accessed, used, or disclosed in jurisdictions with inadequate or weakly enforced legal protections. This includes situations where a third country's rules appear robust on paper but lack meaningful enforcement in practice. For companies operating internationally, particularly those handling sensitive technical or commercial data, this change provides a more realistic basis to protect confidential know-how against disclosure risks outside the EU.
Incident reporting
The proposal introduces a single EU-level entry point for cybersecurity incident notifications, addressing the current situation in which organisations must often report the same incident to multiple authorities under different laws. The new mechanism, operated by ENISA, would route incident reports to the competent authorities under the applicable regimes, such as NIS2 and the GDPR. For companies in highly regulated sectors including medical devices, digital health, and pharmaceuticals, this consolidation should reduce the compliance and administrative burden and facilitate reporting.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.