27 December 2024

FDA's Role In Regulating Artificial Intelligence

Outside GC
Contributor

OGC is a unique law firm that offers the relationship and experience of a traditional law firm with the cost savings and speed of an ALSP. By combining top-notch legal talent and significant business acumen, we deliver the value and efficiency of an in-house lawyer, without adding to our clients' headcount or sacrificing quality.

As artificial intelligence (AI) technology continues to evolve and proliferate, regulators across the globe face the herculean task of developing pragmatic legal frameworks for managing the risks associated with its use. In the U.S., the Food and Drug Administration (FDA) plays a crucial role in this regard by regulating a broad range of healthcare-related AI technologies in an effort to protect patient safety and ensure the efficacy of new innovations.

In fact, the FDA has been regulating AI software designed to diagnose, treat, or prevent medical conditions for years.1 This technology, which is also known as Software as a Medical Device or "SaMD," includes diagnostic tools, such as AI algorithms used for analyzing medical images (like X-rays or MRIs) to detect diseases or abnormalities; therapeutic devices, such as robotic surgical systems or AI-driven infusion pumps; and clinical decision support, which uses AI to arm healthcare providers with information to assist in clinical decision-making.

In addition to SaMD, the FDA also oversees AI applications that have an indirect impact on patient care. These so-called "Indirect AI" tools typically support decision-making by providers, rather than making actual, direct medical decisions. For example, Indirect AI systems help to analyze data, improve patient matching in clinical trials, and optimize healthcare operations. They also are used in early-stage discovery and drug development2 (e.g., drug target identification, selection and prioritization, screening and designing compounds, and modeling pharmacokinetics and pharmacodynamics, among other uses).

Since Indirect AI tools do not provide direct patient care, they tend to present a lower risk to patients than SaMD tools. Nonetheless, the FDA still maintains an oversight role given the sizable impact that Indirect AI applications can have on healthcare outcomes.

FDA's Approach to Indirect AI Oversight

Companies operating in this space can benefit from understanding the FDA's general approach to Indirect AI oversight, which follows the agency's broader risk-based regulatory framework and includes the following key elements:

  1. Risk Classification Based on Functionality
    Unlike SaMD, which is assessed based on direct patient risk, Indirect AI is assessed based on the potential for having a direct impact on healthcare outcomes. For instance, an AI tool designed to help optimize clinical trials may be evaluated based on how or to what extent its data analytics contribute to drug development timelines, rather than as a diagnostic tool.
  2. Emphasis on Real-World Evidence (RWE) and Data Quality
    The FDA encourages companies to use real-world evidence and ensure data quality standards for Indirect AI applications that generate insights. These tools often rely on massive data sets from clinical and operational sources, requiring consistency, accuracy, and transparency in data use. Indirect AI applications supporting clinical decisions must be transparent about data sources, limitations, and potential biases.
  3. Guidelines for Transparency and Validation
    Transparency in AI is crucial, as it ensures clinicians and researchers understand how insights are being generated. The FDA collaborates with technology developers to establish guidelines for documenting AI processes, assumptions, and limitations. Validating AI's accuracy and efficacy with real-world data allows the FDA to review the tool's impact without requiring the same level of intervention as patient-facing AI.
  4. Collaborative Oversight and Industry Partnerships
    Recognizing the fast-paced nature of AI, the FDA partners with industry stakeholders through initiatives like the Digital Health Innovation Action Plan,3 which enable the FDA to stay up-to-date on technological advances and offer feedback and guidance, while allowing flexibility in its regulatory requirements.

Regulating AI in the Future

Like other AI regulatory regimes, the FDA's oversight of AI is still evolving as healthcare AI technology continues to advance. Some of the agency's ongoing initiatives include:

  1. Developing Standardized Metrics for AI Performance
    As more healthcare organizations adopt AI, the FDA is working to establish metrics that evaluate AI performance in supportive roles. These metrics will help ensure consistency across tools that impact drug research, clinical trials, and operational efficiency.
  2. Encouraging Ethical and Transparent Data Use
    Indirect AI tools rely on vast datasets that often include sensitive patient information. The FDA encourages ethical data practices and secure data handling, ensuring that Indirect AI protects patient privacy while generating meaningful insights.
  3. Enhancing Collaborative Frameworks
    By expanding collaborations with industry and academic stakeholders, the FDA fosters an environment where companies can innovate with clear guidelines on safety and efficacy, even for Indirect AI.

Conclusion

The FDA's regulation of Indirect AI represents a balanced approach to innovation and risk management. By focusing on real-world evidence, transparency, and collaborative partnerships, the FDA seeks to ensure that Indirect AI supports healthcare advancements responsibly and safely, while empowering researchers, clinicians, and healthcare organizations to leverage the latest technologies to improve patient outcomes.

Footnotes

1. For an overview, see "Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together," published by the FDA on March 15, 2024.

2. See the FDA's Discussion Paper "Using artificial intelligence and machine learning in the development of drug and biological products".

3. See the FDA's Digital Health Innovation Action Plan.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.