6 November 2024

How The EU AI Act Could Affect Medtech Innovation

Goodwin Procter LLP


AI-driven medical devices could be subject to strict standards for transparency, risk management, and human oversight.

The EU AI Act, which entered into force on August 1, 2024, introduces specific rules for artificial intelligence (AI) systems, especially those deemed "high risk."

Medtech devices, products, and services — such as diagnostic tools, surgical robotics, and personalized treatment plans — often rely on AI components that could be subject to the act, especially if they could pose risks to health, safety, or fundamental rights.

The act recognizes that sophisticated diagnostic systems and systems supporting human decision-making should be reliable and accurate, particularly in the healthcare sector. For this reason, even an AI system incorporated into a medical device merely as a safety component will itself be classified as high risk and subject to the act's highest scrutiny and regulatory burden.

Moreover, the act applies not only to those who develop or provide AI systems but also to those who deploy them, such as medical professionals and healthcare institutions. These groups must ensure that AI systems are used in compliance with the act's requirements, maintain clear records of system performance, and monitor for adverse effects. Medical institutions must ensure that their AI-driven devices are used in accordance with prescribed safety protocols, and healthcare professionals are required to validate AI-generated decisions through human oversight. This includes documenting clinical decisions and intervening when an AI system's recommendations do not align with a patient's health needs.

Importantly, the act may apply to technologies developed, owned, or used by companies located outside the European Union if the technology will be placed on the market or put into service in the EU, or if the technology's output is intended to be used in the EU.

For details on the act in general, read our pieces "The World's First AI Regulation Is Here" and "How to Determine Your Risk Category and What It Means to Be 'High-Risk.'"

Below we highlight some aspects of the act that are particularly important for medtech companies.

  • High-risk classification. AI systems integrated into medical devices as safety components, particularly those used for diagnosis, monitoring, and treatment, are likely to be classified as high risk. This classification carries stringent requirements for safety, transparency, and risk management. Our interactive tool can help companies determine whether their AI systems are high risk. Any AI system classified as high risk must undergo a thorough evaluation to demonstrate that it meets the necessary standards before it can receive the CE mark.
  • Conformity assessments. AI-based medtech products classified as high risk must undergo third-party conformity assessments to ensure compliance with both the AI Act and other relevant EU harmonization legislation, such as the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR). The act seeks to take a coordinated, complementary approach to the various pieces of legislation that apply to medtech products. For example, it allows for a single conformity assessment under both the EU AI Act and the MDR or IVDR. In practice, however, this may prove more complicated because the notified body must be designated under both regimes. This is a developing space that medtech companies will need to monitor.
  • Transparency. AI systems in medtech will be subject to mandatory transparency requirements. For high-risk AI systems, this includes clear documentation of how the system reaches its decisions. Providers must ensure the system's outputs are explainable to both medical professionals and patients.
  • Risk management. Medtech companies using high-risk AI systems will need to establish rigorous risk-management systems to identify and mitigate risks associated with the use of AI in healthcare. This includes monitoring the system post-deployment to prevent or minimize harm.
  • Human oversight. High-risk AI systems used in healthcare must have mechanisms allowing for human oversight to ensure that healthcare professionals can audit and adjust clinical decisions, especially those affecting patient health.
  • Logs, accuracy, and cybersecurity. High-risk AI systems will need to automatically log events over the system's lifetime, achieve a level of accuracy and robustness to errors or faults appropriate to their role, and meet cybersecurity requirements.
  • Deployment obligations. The act also places responsibility on deployers of AI systems (e.g., hospitals and clinicians). This includes ensuring the AI system is used in accordance with its instructions for use, assigning oversight to trained individuals, and carrying out ongoing monitoring and surveillance. These obligations will invariably affect relationships throughout the AI contractual chain, which includes the AI system provider, distributors, resellers, medical device providers, and healthcare providers.

All medtech companies using AI systems will need to ensure an appropriate level of AI literacy within the business. Supervisory authorities are expected to develop guidelines on the act's obligations that integrate and embed EU fundamental rights. These guidelines will likely set governance standards that medtech companies should consider, regardless of whether the act applies to them.

The act will be implemented in phases, with compliance deadlines linked to risk categories. Most obligations will not apply until August 2, 2026, and existing technologies benefit from a grace period until they undergo significant design changes. Refer to our "EU AI Act Implementation Timeline" for a list of important dates.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
