Negotiated with the Member States in December 2023, the EU Artificial Intelligence (AI) Act, the regulation laying down harmonised rules on the transformative technology, gained approval from the European Parliament in March 2024.
The EU AI Act awaits final adoption and endorsement by the Council of the EU, which is expected before the end of the legislature under a corrigendum procedure. After publication in the Official Journal of the European Union, its provisions are expected to apply gradually over a 24-month period.
The regulation aims to safeguard fundamental rights, democracy, the rule of law and environmental sustainability against high-risk AI, while fostering innovation and positioning the EU as a leader in this field. It establishes significant obligations for AI and its operators, based on potential risks and impact levels.
The EU AI Act is intended as a cross-sector piece of legislation, but its impact on life sciences and healthcare will be significant.
AI Act and life sciences
Life sciences businesses may come under the EU AI Act if their portfolio includes technologies dependent on AI or AI-driven digital workflows.
Telemonitoring tools, digital therapeutics, healthcare robots, patients' wearables, healthcare providers' prescription software, medical chatbots and various algorithms used in care centres are just a few examples of applications that often rely on AI to achieve their functionality.
However, AI does not have to be incorporated in a technology for the regulation to apply. The EU AI Act regulates non-embedded AI which serves the functionality of a product without physical integration.
It is also common for companies to utilise AI before either launching a pharmaceutical product or obtaining a CE ("Conformité Européenne") marking for a medical device, or in support of these products.
Uses across the product and service cycle
Where AI systems drive research and development and production processes in the life sciences, the EU AI Act will need to be considered. For example, in the discovery phase of life science research, AI systems are increasingly used for the identification and validation of target molecules.
AI tools can benefit clinical investigations for medical devices, performance studies on in vitro diagnostics and trials of medicines by helping to manage patient numbers and recruitment. They can also optimise the design of clinical studies, while AI elements in 3D printing, adaptive manufacturing and predictive maintenance can accelerate the production phase of life sciences products. Quality controls are also enhanced by AI-powered sensors in production plants.
And, across all life sciences industries, the use of AI is on the rise in the post-marketing phase of product development.
AI can assist in collecting, sorting, triaging or centralising the reporting of adverse events, serious incidents, complaints or reports from patients or healthcare professionals. Similarly, AI-supported tools are proliferating to facilitate periodic safety update reports, post-market clinical follow-up, trend reporting and other post-market surveillance activities that life sciences companies must carry out.
Organisational support
Beyond the lifecycle of regulated products or services, life sciences businesses are harnessing AI on a daily basis to benefit patients and the healthcare community. Human resource (HR) professionals in organisations can deploy AI tools to recruit and screen talent and automate time-consuming tasks. This can free up HR professionals to focus on strategic decisions.
In the procurement of life sciences products, industry professionals use AI to manage supplier risks and digitalise data entry or invoice processing. In-house legal professionals are increasingly relying on AI-driven contract management tools, on digital platforms to deliver advice to their peers and on legal research software.
In essence, AI is everywhere and the EU AI Act's blueprint spans numerous business units and multiple operations within pharma and medtech organisations.
Threefold regulatory approach
The notion of an AI system in the EU AI Act is broadly construed and follows a threefold approach. The definition is closely aligned with the work of international organisations working on AI to facilitate international convergence.
Firstly, an AI system is a machine-based system designed to operate with varying levels of autonomy. The term "machine-based" refers to the fact that AI systems run on machines.
Secondly, the system may exhibit adaptiveness after deployment. This adaptiveness refers to self-learning capabilities, allowing the system to change while in use.
Thirdly, for explicit or implicit objectives, the system infers, from the input it receives, how to generate outputs (such as predictions, content, recommendations or decisions) that can influence physical or virtual environments. This capability to infer means that AI systems can derive models or algorithms from inputs or data; it transcends basic data processing and enables learning, reasoning or modelling.
General-purpose AI
The EU AI Act also regulates general-purpose AI, across two separate categories: general-purpose AI models and general-purpose AI systems.
The concept of general-purpose AI models covers any AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and can be integrated into a variety of downstream systems or applications. It excludes AI models that are used for research, development or prototyping activities before they are released on the market.
The general-purpose AI models category is clearly set apart from the notion of "AI systems" to enable legal certainty. These models are typically trained on large amounts of data, through various methods, such as self-supervised, unsupervised or reinforcement learning. The definition is based on the key functional characteristics of a general-purpose AI model. Such characteristics include the generality and the capability to competently perform a wide range of distinct tasks.
General-purpose AI systems are based on a general-purpose AI model and have the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.
When a general-purpose AI model is integrated into or forms part of an AI system, it should be considered a general-purpose AI system when, due to this integration, it has the capability to serve a variety of purposes.
Union law synergy
The EU AI Act's rules on placing on the market, putting into service and using or deploying high-risk AI systems are laid down consistently with the European Union's New Legislative Framework (NLF), adopted in 2008.
The new AI regulation follows the same NLF horizontal principles as so-called "industrial goods", such as construction products, personal protective equipment, radio equipment, medical devices, gas appliances, measuring instruments and many other technologies that must bear the CE-marking of conformity in accordance with EU laws.
In a nutshell, the EU's approach to regulating AI is to treat AI systems like any other industrial product, while introducing new, AI-specific obligations that are crafted around their digital nature and address their potential unpredictability.
This position suggests a simultaneous and complementary application of various EU legislative acts.
Osborne Clarke comment
The EU AI Act aims to cover a broad range of industries, but its effect on life sciences and healthcare holds particular weight.
The combination of regulatory instruments may prove challenging for life sciences technologies that currently follow existing EU NLF regulations; for example, the machinery, medical devices or low-voltage rules. Some of these technologies will soon be captured by the upcoming EU AI Act because they contain one or more high-risk AI systems.
The regulation intends to minimise the burden on operators and avoid any possible duplication. For high-risk AI systems related to products covered by existing EU NLF legislation, compliance with the requirements of the EU AI Act would be assessed as part of the conformity assessment already provided for in the applicable NLF law. The true test will lie in whether coordinating these assessments, including with conformity assessment bodies, becomes a seamless endeavour.
The EU AI Act is without prejudice to existing Union law – in particular on data and consumer protection, fundamental rights, employment and protection of workers, and product safety – to which the regulation is complementary.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.