In January 2024, the U.S. Government Accountability Office (GAO) issued a report highlighting current obstacles to the U.S. Food and Drug Administration's (FDA) timely and effective regulation of artificial intelligence (AI) and machine learning (ML) in medical devices and other emerging health care technologies (GAO Report). The GAO Report roughly coincided with remarks made by FDA Commissioner Robert Califf and increased media attention on the various challenges that federal agencies, such as the FDA, face in regulating AI/ML. As both Congress and the Executive Branch mull strategies to ensure the safe, effective, equitable and efficient development and marketing of AI/ML solutions and tools, it is critical that each branch not only understand the limitations on the FDA's authority, but also further empower and enable the FDA to better regulate technologies that, by their very design, will change over time.

Defining AI/ML and the Regulatory Need

The FDA defines AI as "a branch of computer science, statistics, and engineering that uses algorithms or models to perform tasks and exhibit behaviors such as learning, making decisions, and making predictions," with ML considered "a subset of AI that allows models to be developed by training algorithms through analysis of data, without models being explicitly programmed." Rapid advancements in AI/ML have led to the development of large language models (LLMs) and generative AI, such as GPT-4 and Med-PaLM 2. These tools have numerous potential applications across the health care industry, including, but not limited to, enhancing clinical documentation, generating discharge summaries, automating insurance pre-authorization, interpreting scans, suggesting treatment options, triaging patients, analyzing laboratory tests, interpreting provider notes, making personalized health recommendations, providing clinical decision support, protecting health data and generating responses via chatbots.

While embracing AI/ML has the potential to transform the health care industry by improving care quality, reducing cost and eliminating inefficiencies, developments in AI/ML present a multitude of risks and ethical challenges for health care stakeholders. For example, an AI/ML tool can hallucinate (i.e., make up information), incorporate biases reflected in the data on which it is trained, pose risks to data privacy and intellectual property rights, reduce transparency and human involvement in medical decision-making, be used to improperly deny care, or otherwise decrease in accuracy and efficacy over time.

Given the unprecedented pace of evolution in AI/ML technology and the growing lists of potential applications and risks, regulators must provide, in a coordinated manner, guideposts for the development and use of AI/ML in health care. Such regulation should appropriately balance the need for safety, transparency and consumer protection with the realities of innovation (especially funding and staffing challenges faced by startups) to foster an environment in which AI/ML development and use can responsibly thrive.

AI/ML in Health Care Has Been and Remains at the Forefront of the Regulatory Agenda

Over the past five years, federal regulation of AI/ML has increased in frequency and complexity, especially in the health care sector.

With the issuance of its 2019 discussion paper, titled "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)" (Proposed AI/ML Framework), and its byproduct, the 2021 "Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan" (AI/ML Action Plan), the FDA has been at the forefront of the Executive Branch's efforts to develop a framework for AI/ML regulation. In the Proposed AI/ML Framework, the FDA committed to the following general regulatory principles that "balance the benefits and risks, and provide access to safe and effective AI/ML-based SaMD:

  1. Establish clear expectations on quality systems and good ML practices (GMLP);
  2. Conduct premarket review for those SaMD that require premarket submission to demonstrate reasonable assurance of safety and effectiveness and establish clear expectations for manufacturers of AI/ML-based SaMD to continually manage patient risks throughout the lifecycle;
  3. Expect manufacturers to monitor the AI/ML device and incorporate a risk management approach and other approaches outlined in "Deciding When to Submit a 510(k) for a Software Change to an Existing Device" Guidance in development, validation and execution of the algorithm changes (SaMD Pre-Specifications and Algorithm Change Protocol); and
  4. Enable increased transparency to users and FDA using post-market real-world performance reporting for maintaining continued assurance of safety and effectiveness."

In September 2022, the FDA issued a final guidance document regarding Clinical Decision Support (CDS) Software (CDS Guidance). The FDA describes CDS as "a software function that provides health care professionals and patients with knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health and health care." Under section 520(o)(1)(E) of the Federal Food, Drug, and Cosmetic Act (FD&C Act), added by the 21st Century Cures Act, most CDS software functions are carved out of the statutory definition of "device"; in the CDS Guidance, the FDA further describes the distinction between CDS software functions that fall outside the FDA's regulatory authority and "devices" subject to FDA oversight.

Most notably for AI/ML regulation, following the White House's release of its Blueprint for an AI Bill of Rights on October 4, 2022, the FDA issued its first-ever AI/ML device draft guidance in April 2023, titled "Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions" (PCCP Draft Guidance). The FDA notes that the goal of the PCCP Draft Guidance is to "provide a forward-thinking approach" to the development of "machine learning-enabled device software functions or ML-DSFs," which incorporate software that learns through data, including real-world data, to perform tasks without being explicitly programmed. The FDA first described "Predetermined Change Control Plans" (PCCPs) in its 2019 Proposed AI/ML Framework to encourage the "iterative development of ML-DSFs," and Congress subsequently provided explicit authority for the FDA to approve or clear such PCCPs in the Food and Drug Omnibus Reform Act of 2022, which added a new Section 515C to the FD&C Act. The PCCP Draft Guidance builds on that foundation by providing recommendations on the content of such PCCPs. This approach provides developers and manufacturers with useful foresight into how the regulatory process will impact the development and functionality of devices. According to the FDA's database of AI/ML-Enabled Medical Devices, as of October 19, 2023, the FDA had authorized more than 700 such devices with embedded AI/ML.

Since October 2023, other Executive Branch agencies have built on the FDA's foundation by issuing several significant regulatory actions targeting AI/ML. First, President Biden's "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" (discussed here) describes a coordinated approach across the federal government to promote safety, privacy, equity and innovation in the development and use of AI/ML, especially in health care. Then, HHS, through its Office of the National Coordinator for Health Information Technology (ONC), issued a final rule on "Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing" (discussed here), in which ONC codifies, among other policies, a regulatory framework governing the incorporation of AI in electronic health record systems. The final rule requires software developers to provide customers with more data so that providers can determine whether AI is "fair, appropriate, valid, effective and safe." However, the final rule leaves most applications of AI unregulated, including those that do not fall under the purview of federal data privacy and security laws. Most recently, the Centers for Medicare and Medicaid Services (CMS) issued an FAQ subregulatory guidance document addressing the use of AI in making coverage determinations (discussed here).

Industry stakeholders and politicians alike have criticized the White House for engaging in a scattershot, uncoordinated approach to rulemaking. The fear is that, due to overly broad rules and guidance, more health care stakeholders than intended will have to navigate a tangled web of regulation to determine how to compliantly develop, market, evaluate and/or use AI/ML tools and devices containing them, a resource-intensive process that may hinder or delay technology development and rollout and disadvantage startups, independent practitioners and underserved communities. Furthermore, some industry stakeholders have expressed confusion over the scope of the FDA's authority to regulate AI in certain contexts. Given the nascent and fragmented state of federal AI/ML regulation, state legislators, regulators and medical boards are beginning to step into the fray with state-level policy, which will only increase the regulatory complexity that manufacturers and providers must navigate.

The FDA Commissioner's Remarks and the GAO Report Each Spotlight Obstacles to the FDA's Effective Regulation of AI/ML

FDA Commissioner Califf has recently made various remarks that aligned with key takeaways from the GAO Report regarding the FDA's authority and capacity to effectively regulate AI/ML. First, Commissioner Califf stated at a conference that with AI/ML, "[t]he algorithm's not only living, but the assessment of the algorithm needs to be continuous," which he noted would require the FDA to double in size to keep pace and to gain new legal authority to conduct post-market monitoring and evaluation of approved devices outside of instances of device recalls or adverse event reports. Commissioner Califf then quipped that "the taxpayer is not very interested in doing that" and, alluding to the ongoing federal budget crisis, added that "[i]t is very hard to get Congress to give capital to make the changes that I think would be revolutionary."

The January 2024 GAO Report reiterated points made by Commissioner Califf regarding needed enhancements to FDA's authorities to oversee AI/ML-enabled devices. GAO notes: "According to FDA, legislation passed to date has made valuable improvements for FDA's oversight of AI/ML-enabled medical devices, but it does not address all potential regulatory challenges the agency faces with these emerging technologies." GAO urges FDA to clearly identify, document and communicate "specific legislative changes that would help it address these challenges" to Congress and warns that failure to do so may result in FDA falling short of "its priority of developing new regulatory techniques to evaluate these innovative emerging technologies."

A Path Forward for Empowering the FDA's Oversight of AI/ML-Enabled Medical Devices

Despite Congressional gridlock, Capitol Hill has been keenly focused on the emergence of AI/ML technologies and their implications for health care. Among other areas, improving regulation of AI/ML was identified as an action item in a recent report issued by Senator Bill Cassidy, the ranking member of the Senate Committee on Health, Education, Labor, and Pensions. Additionally, a newly formed Congressional Bipartisan AI Task Force (AI Task Force), whose structure and purpose resemble those of the bipartisan, bicameral commission that would have been established by the National AI Commission Act (introduced in 2023 but not enacted), is likely to examine these and other initiatives aimed at better understanding and regulating AI/ML.

Given the uncertain fate of AI/ML legislation, health care stakeholders have proposed, and Commissioner Califf has endorsed, a novel and controversial approach to ensuring the safety and effectiveness of AI/ML-enabled medical devices through public-private assurance laboratory partnerships. Under a recent proposal, a small number of federally approved assurance labs (testing grounds to validate and monitor AI/ML as used in medical devices) would be housed at entities, such as major academic institutions or health systems, with the requisite resources and expertise to oversee such technologies. While there is some support for the proposal in Congress, implementing the assurance lab model, either as a stop-gap regulatory measure or as a long-term approach to AI/ML regulation, faces an uphill battle. Critics of the proposal note that AI/ML tested at a major university may perform differently and have unforeseen implications when deployed in another setting, such as at a small rural hospital. Critics also point to potential conflicts of interest, such as instances where potential candidates for an assurance lab are also developing their own AI/ML tools or collaborating with private firms on the same.

Expect such topics to garner more media and regulatory attention in the coming months as the AI Task Force develops its comprehensive report and as Congressional committees set forth bipartisan policy proposals to bolster the federal government's ability and capacity to regulate AI/ML effectively.
