Low-Risk AI Bears High Stakes For Digital Health In New EU Regulation


Life sciences providers of low- or medium-risk AI will also be subject to the EU AI Act

At the heart of the new European regulation laying down harmonised rules on artificial intelligence (AI) is a focus on high-risk AI systems and their stringent governance. However, despite this emphasis, crucial questions remain around AI products that qualify as "AI systems" but do not fall within the regulation's high-risk categories, and around what the rules mean for the life sciences and healthcare sectors.

Examining low risk

While the regulation does not explicitly outline or discuss concepts such as "low" or "medium" risk, some AI systems that do not meet the high-risk threshold - and are not prohibited outright as posing an unacceptable risk - remain subject to regulatory requirements.

An AI system may fall into this residual category where, for example, it is not captured by the conditions for high-risk AI systems, or where the regulation explicitly provides a derogation from high-risk qualification.

Examples of non-high-risk applications in life sciences and healthcare are wide ranging. They may include AI-guided systems that optimise inventory management, ensure efficient supply-chain operations and reduce wastage of pharmaceutical products. They would also include AI-assisted tools for monitoring and ensuring compliance with regulatory requirements in manufacturing processes, and for enhancing quality control and adherence to standards. They can also include AI-based predictive maintenance systems for medical devices that give advance warning of potential failures or maintenance needs.

AI algorithms integrated into electronic signature platforms used by pharmaceutical companies to verify the identity of individuals signing off on regulatory documents are a further example; as are AI-powered biometric authentication systems integrated into medical devices to verify the identity of healthcare professionals authorised to operate or access sensitive information. Some AI-based biometric verification solutions implemented in healthcare settings to identify patients before performing diagnostic tests or medical procedures may also be deemed non-high risk.

Lifestyle or wellbeing AI systems that do not pursue a medical purpose are also non-high risk. They typically fall outside the medical device regulatory framework and, therefore, avoid a potential high-risk classification. Non-medical AI-powered sleep-tracking devices that monitor users' sleep patterns, AI-driven symptom trackers, or AI algorithms integrated into stress management tools that monitor users' stress levels by analysing physiological signals can fall into this category.

Specific to the medtech industry, class I medical devices qualifying as AI systems may also be considered non-high-risk AI systems; for example, apps intended to support conception by calculating a user's fertility status based on an AI-aided algorithm.

Tailored scrutiny

Qualification assessments should be made on a case-by-case basis, considering the system's intended purpose, the regulation's "AI system" definition and risk categorisation principles.

The regulation mandates the European Commission to provide guidelines on derogations from the high-risk AI system qualification. These guidelines are intended to give practical examples of AI system uses that are high risk and of those that are not.

Notably, qualification as non-high risk is not irreversible, in particular within the ambit of market surveillance authorities' controls. If an authority considers that an AI system classified by its provider as non-high risk is in fact high risk, it may carry out an evaluation. This evaluation may lead to an injunction requiring the provider to take action to bring the AI system into compliance with the regulation. An authority's injunction may also include other corrective actions (more on enforcement and monitoring will follow later in this series of Insights).

Regulatory frameworks

Providers of AI systems who consider their products not to be high risk, on the basis of the regulation's conditions, remain bound by a number of obligations. This is relevant, for instance, for providers who can demonstrate that their AI system is not high risk because it does not materially influence the outcome of decision-making - that is, it does not affect the substance, and thereby the outcome, of that decision-making.

To ensure traceability, this type of AI system provider should assess the risk and draw up documentation of its assessment before the system is placed on the market or put into service. The documentation should be provided to national competent authorities upon request. The provider should also register itself and upload information about its AI system in the EU database established under the regulation.
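By way of illustration only - the regulation does not prescribe a format for this documentation, and the field names below are assumptions for the sketch - a provider might capture such an assessment as a structured, machine-readable record that can be versioned alongside the product and produced on request:

```python
# Illustrative only: the EU AI Act does not prescribe a format for this
# documentation; the fields below are assumptions for the sketch.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class NonHighRiskAssessment:
    system_name: str
    provider: str
    intended_purpose: str
    derogation_relied_on: str   # e.g. "no material influence on decision-making"
    rationale: str
    assessed_on: str

record = NonHighRiskAssessment(
    system_name="Inventory optimiser",           # hypothetical product
    provider="ExamplePharma GmbH",               # hypothetical provider
    intended_purpose="Optimise warehouse stock levels for medicines",
    derogation_relied_on="No material influence on the outcome of decision-making",
    rationale="System only flags stock anomalies; humans decide on all orders.",
    assessed_on=str(date.today()),
)

# Keep the record available for national competent authorities on request.
with open("ai_risk_assessment.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```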

Transparency requirements apply, regardless of risk categorisation, if AI systems are intended to interact directly with natural persons, including employees, patients or healthcare providers.

The system must be designed and developed in such a way that the individuals concerned are informed that they are interacting with an AI system. There is an exception to this obligation where AI involvement is obvious to an individual who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.
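As a minimal sketch of how such a disclosure might surface in a patient-facing chat tool - the wording and placement are illustrative assumptions, not taken from the regulation:

```python
# Minimal sketch: show an AI-interaction disclosure before the first
# AI-generated reply. Wording and flow are illustrative assumptions.
def start_session(send=print) -> None:
    send(
        "Please note: you are interacting with an AI system, "
        "not a human healthcare professional."
    )
    # ...hand over to the AI assistant only after the disclosure is shown.

start_session()
```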

Deployers of deep-fake AI systems are subject to comparable disclosure obligations. A deep fake arises where the AI system is used to generate or manipulate content - for example, an image or a video - that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic.

Providers of AI systems, including general-purpose AI systems, that generate synthetic audio, image, video or text content are also captured by the regulation, regardless of whether the systems are high risk or not. They must ensure that outputs are both marked in a machine-readable format and detectable as artificially generated or manipulated. This requirement does not apply if the system performs an assistive function for standard editing, or if it does not substantially alter the input data provided by the deployer or its semantics.
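As a simplified sketch of what machine-readable marking could look like for a PNG image - the key names are assumptions, and production systems would more likely rely on interoperable provenance standards such as C2PA credentials or robust watermarks:

```python
# Simplified sketch: embed and read back a machine-readable
# "AI-generated" marker using PNG text chunks (pip install Pillow).
# Key names are illustrative; real deployments would typically use
# interoperable provenance standards (e.g. C2PA) or robust watermarks.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src: str, dst: str, generator: str) -> None:
    meta = PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("generator", generator)
    Image.open(src).save(dst, pnginfo=meta)

def is_marked(path: str) -> bool:
    # Detection side: look for the marker when the image is re-read.
    return Image.open(path).text.get("ai-generated") == "true"

Image.new("RGB", (64, 64)).save("output.png")        # stand-in for model output
mark_as_ai_generated("output.png", "marked.png", "example-model-v1")
print(is_marked("marked.png"))                       # True
```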

Codes of conduct

In addition to the EU AI Act provisions, a substantial portion of the regulatory framework for non-high-risk AI systems is intended to be delivered through codes of conduct. These codes are meant to be developed by individual providers or deployers of AI systems, by organisations representing them - for example, trade unions or industry associations - or by both. They may cover one or more AI systems, taking into account the similarity of the intended purpose of the relevant systems.

Very much like codes of conduct in the pharmaceutical and medical device industries, the codes referred to in the EU AI Act will be drawn up taking into account industry best practices, as required by the regulation.

Life sciences businesses are advised to monitor developments at EU level, including the Commission's AI Pact initiative, and to reach out to their industry associations if they have joined one. While codes of conduct will apply on a voluntary basis, close attention should be paid to their enactment and to the extent to which they will apply to pharmaceutical and medical device companies. Notably, under the regulation, codes may concern the voluntary application of high-risk AI system requirements not only by providers but also by those using the system in a professional capacity, such as deployers.

The Commission will evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of high-risk requirements to lower-risk AI systems. This will occur for the first time four years from the date of entry into force of the EU AI Act.

Safety nets

Horizontal product safety legislation such as the EU general product safety regulation (GPSR), which has recently been revised to incorporate digital elements, remains applicable to lower-risk AI systems, whose providers and deployers are not bound by high-risk AI system requirements.

The EU AI Act recognises the importance of ensuring that AI systems related to products that are not high risk are safe when placed on the market or put into service, with the GPSR acting as a safety net for those products.

EU consumer product safety rules may escape the notice of life sciences businesses. Notably, pharmaceuticals fall outside the GPSR's scope because they are subject to a pre-market assessment that includes a specific risk-benefit analysis. Nevertheless, businesses involved in supplying low-risk AI systems to patients need to be familiar with these rules.

Osborne Clarke's comment

Providers of AI that meets the regulation's definition of an AI system but is not categorised as high risk are encouraged to prepare for the regulation's new obligations, which span multiple aspects including - as applicable - traceability, transparency, documentation or even technical design.

In view of these requirements, AI screening and risk mapping are crucial for life sciences businesses active in digital health. While the regulation does not explicitly discuss concepts such as "low" or "medium" risk, it does set out a specific regulatory framework for some of those systems, even though they do not pose high risks. To streamline compliance, the adoption of upcoming AI codes of conduct - including those of specific relevance to the life sciences sector - and updates to EU horizontal product safety legislation should be monitored.

