1 July 2022

Medical & In Vitro Diagnostic Devices And The AI Act: Proposed Regulatory Requirements

William Fry


William Fry is a leading full-service Irish law firm with over 310 legal and tax professionals and 460 staff. The firm's client-focused service combines technical excellence with commercial awareness and a practical, constructive approach to business issues. The firm advises leading domestic and international corporations, financial institutions and government organisations. It regularly acts on complex, multi-jurisdictional transactions and commercial disputes.

Medical and diagnostic devices are becoming increasingly advanced, and many are beginning to incorporate cutting-edge Artificial Intelligence (AI) systems into their technology. The already complex regulatory framework is set to become even more complicated with the introduction of the proposed AI Act, which imposes specific requirements on certain AI systems and interacts with existing sectoral legislation, including the Medical Devices Regulation (Regulation (EU) 2017/745) (MDR) and the In Vitro Diagnostic Medical Devices Regulation (Regulation (EU) 2017/746) (IVDR).

The AI Act

The AI Act was introduced by the European Commission in April 2021. It is currently going through the EU legislative process and is expected to become law by late 2023 or early 2024. The AI Act aims to protect citizens' fundamental rights and freedoms by banning certain AI applications (such as emotional manipulation) and placing other so-called 'High-Risk' systems under a specific regulatory regime.

The AI Act defines AI as software that is developed with one or more of the following techniques (set out in Annex I of the Act) and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with:

  • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods, including deep learning (a minimal sketch follows this list);
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and
  • Statistical approaches, Bayesian estimation, search and optimisation methods.
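
To make the first category concrete, the sketch below shows a minimal supervised machine-learning model: software that, for a human-defined objective (classifying readings as normal or abnormal), generates a prediction as its output. It is purely illustrative; the data is synthetic and nothing in it is drawn from the Act or from any real device.

```python
# Purely illustrative: a minimal supervised machine-learning model of the
# kind captured by the AI Act's definition -- software that, for a
# human-defined objective, generates an output (here, a prediction).
# All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic "sensor readings": two features per sample.
X_normal = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
X_abnormal = rng.normal(loc=3.0, scale=1.0, size=(100, 2))
X = np.vstack([X_normal, X_abnormal])
y = np.array([0] * 100 + [1] * 100)  # 0 = normal, 1 = abnormal

# The supervised learning step: fit the model to labelled examples.
model = LogisticRegression().fit(X, y)

# The output "influencing the environment": a prediction for a new reading.
new_reading = np.array([[2.5, 2.8]])
print(model.predict(new_reading), model.predict_proba(new_reading))
```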

AI is becoming increasingly common in medical diagnosis. Real-world examples include:

  • Assisting clinicians in formulating a diagnosis or recommending a treatment option;
  • Using machine learning in oncology and pathology to recognise cancerous tissue; and
  • Analysing bodily fluids.

In the case of rare diseases, facial recognition software used jointly with machine learning can scan patient photos and identify phenotypes that correlate with rare genetic diseases.

Article 6 of the AI Act states that, irrespective of whether an AI system is placed on the market or put into service independently of the product concerned, the AI system shall be considered High-Risk where both of the following conditions are fulfilled:

  1. The AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II of the Act (which includes the MDR and the IVDR); and
  2. The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product under the Union harmonisation legislation listed in Annex II.
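
Restated as simple logic (an informal paraphrase for illustration, not legal drafting), the Article 6 test is a conjunction: both limbs must be satisfied.

```python
# Informal paraphrase of the Article 6 High-Risk test set out above:
# both conditions must be fulfilled.
def is_high_risk(covered_by_annex_ii_legislation: bool,
                 requires_third_party_conformity_assessment: bool) -> bool:
    """First limb: the AI system is a safety component of (or is itself) a
    product covered by Annex II legislation (e.g. the MDR or IVDR).
    Second limb: that product must undergo third-party conformity assessment."""
    return (covered_by_annex_ii_legislation
            and requires_third_party_conformity_assessment)

# An AI safety component of an MDR-covered device requiring notified-body
# review satisfies both limbs:
print(is_high_risk(True, True))  # True: High-Risk under the proposed Act
```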

This means that medical and in-vitro diagnostic devices subject to the Regulations, which feature an AI system that is a safety component of the device, will be considered High-Risk and subject to regulatory oversight pursuant to the AI Act. Therefore, if a device such as a pacemaker used an AI system to ensure it was operating properly (e.g. using machine learning to identify the user's normal cardiological parameters), and if that system was considered a safety component, the new regulatory requirements would apply.
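
To picture what such a safety component might look like in the pacemaker example, the sketch below uses a simple learned baseline to flag readings that depart from a patient's normal parameters. It is a hypothetical toy only: a real safety component would be developed under the MDR's quality and conformity-assessment requirements, and every name and figure below is invented.

```python
# Hypothetical toy only: a baseline-learning anomaly detector of the kind
# the pacemaker example alludes to. Nothing here reflects a real device.
import numpy as np

class HeartRateBaseline:
    """Learns a patient's 'normal' heart-rate range from past readings."""

    def fit(self, readings_bpm: np.ndarray) -> "HeartRateBaseline":
        self.mean = readings_bpm.mean()
        self.std = readings_bpm.std()
        return self

    def is_anomalous(self, reading_bpm: float, z_threshold: float = 3.0) -> bool:
        # Flag readings more than z_threshold standard deviations from baseline.
        return abs(reading_bpm - self.mean) > z_threshold * self.std

# Synthetic history of readings for one patient (mean ~70 bpm).
rng = np.random.default_rng(seed=1)
history = rng.normal(loc=70.0, scale=5.0, size=1000)

baseline = HeartRateBaseline().fit(history)
print(baseline.is_anomalous(72.0))   # False: within the learned normal range
print(baseline.is_anomalous(130.0))  # True: well outside the learned range
```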

Providers of High-Risk AI Systems

The bulk of the regulatory responsibility under the proposed AI Act falls upon providers of High-Risk AI systems (such as those in medical devices). The AI Act defines a provider as a natural or legal person, public authority, agency or other body that develops an AI system, or has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.

An important point to note is that the AI Act has extra-territorial effect: it applies to any person or entity, wherever established, that wishes to place a High-Risk AI system on the EU market.

Providers of High-Risk AI systems will have specific responsibilities, including:

  • Implementing, documenting, and maintaining a risk management system (this must be a "continuous iterative process" throughout the system's entire lifecycle, requiring regular updating);
  • Ensuring that training, validation, and testing data sets are subject to appropriate data governance and management practices (Special Category personal data may be processed only subject to appropriate safeguards);
  • Drawing up technical documentation before the system is placed on the market or put into service. This must be kept up to date and drawn up so as to demonstrate that the system complies with the Act;
  • Ensuring High-Risk systems have automated logging of events conforming to recognised standards (a minimal illustration follows this list);
  • Designing High-Risk systems so that natural persons can effectively oversee them;
  • Ensuring the system undergoes the relevant conformity assessment procedures;
  • Informing national competent authorities of issues and demonstrating conformity if requested.
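
On the logging obligation in particular, the Act requires that High-Risk systems be capable of automatically recording events while operating, but it does not prescribe an implementation. The sketch below shows one possible shape for such a record; the field names and values are invented for illustration.

```python
# Illustrative only: one possible shape for the automated event logging the
# proposed Act requires of High-Risk systems. The record fields are invented.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system_audit")

def log_event(event_type: str, model_version: str, input_ref: str, output: str) -> None:
    """Append a timestamped, structured record of each use of the system."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "model_version": model_version,
        "input_ref": input_ref,  # a reference to the input, not the data itself
        "output": output,
    }
    logger.info(json.dumps(record))

# Example: record a single inference event.
log_event("inference", "v1.4.2", "reading-0042", "anomaly_detected")
```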

Conclusion

The use of AI systems in medical and diagnostic devices is a prime example of how AI can be a paradigm-shifting source of good in modern society. However, much about the technology is still not understood, due to the "black box" nature of the convolutional neural networks used in deep learning. A balance must be struck between making the best possible technology available to patients and protecting their fundamental rights and freedoms. Having a proper compliance framework in place before the AI Act becomes law will help ensure a seamless transition. We recommend that companies incorporating AI systems into their technology consider now how they plan to deal with the new regulatory requirements.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
