This installment of Ropes & Gray's podcast series Non-binding Guidance focuses on FDA regulatory developments in the area of artificial intelligence ("AI") and machine learning. AI and machine learning represent a rapidly growing frontier in digital health, with applications ranging from medical device software used for diagnosis and triage, to drug candidate selection, to clinical trial design and interpretation. In this episode, FDA regulatory attorneys Kellie Combs, Greg Levine, and Sarah Blankstein explore the development and uptake of these technologies in response to the ongoing COVID-19 pandemic, FDA's current regulatory landscape for these technologies, recent steps FDA has taken to update its regulatory approach to these tools in coordination with industry, and continuing trends they expect to see in this area through 2021.
Transcript:
Kellie Combs: Hi, I'm Kellie Combs, a partner in the life sciences and regulatory compliance practice group at Ropes & Gray and co-lead of Ropes & Gray's Digital Health Initiative. Welcome to Non-binding Guidance, a podcast series from Ropes & Gray focused on current trends in FDA regulatory law, as well as other important developments affecting the life sciences industry. I'm here today with my colleague Greg Levine, a partner in the life sciences regulatory and compliance practice, also based in Washington, D.C. with me, and Sarah Blankstein, a senior associate in our life sciences regulatory and compliance practice who's based in our Boston office. On today's podcast, we'll discuss FDA regulatory developments in the area of artificial intelligence and machine learning.
Artificial intelligence, or AI, has been broadly defined as the science and engineering of making intelligent machines. And machine learning, or ML, is an artificial intelligence technique that can be used to design and train software algorithms to learn from and act on data. Software developers can use ML to create an algorithm that is locked, so that its function does not change, or adaptive, so that its behavior can change over time based on new data. AI and ML can derive insights and recognize patterns in vast amounts of data much faster than humans can, and without the same risk of human error. AI and machine learning represent a rapidly growing frontier in digital health, with applications ranging from medical device software used for diagnosis and triage, to drug candidate selection, to clinical trial design and interpretation. AI and ML applications have been on the rise now for several years, but the development and uptake of these technologies have accelerated during the pandemic, with industry, health care providers, regulators, and investors turning to AI and ML tools to aid in drug and vaccine research, patient screening, triage, and monitoring. As more and more AI and machine learning tools continue to be developed, it has become clear that FDA's current regulatory framework is not well-equipped to address some of the unique challenges that these technologies pose. FDA has recently taken some steps to begin clarifying its regulatory approach to these tools in coordination with industry and anticipates continuing work in this area throughout 2021.
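[Editor's note: to make the locked-versus-adaptive distinction concrete, here is a minimal, hypothetical Python sketch. The toy model, weights, and update rule are invented purely for illustration; no cleared device works exactly this way.]

```python
# Minimal, hypothetical sketch of "locked" vs. "adaptive" algorithms.
# The model is a toy risk score; all numbers are illustrative only.

import math

class LockedModel:
    """A locked algorithm: weights are frozen at clearance time,
    so its behavior never changes in the field (inputs and outputs
    can still be logged for a future, separately reviewed update)."""

    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def predict(self, features):
        z = self.bias + sum(w * x for w, x in zip(self.weights, features))
        return 1.0 / (1.0 + math.exp(-z))  # probability-like score

class AdaptiveModel(LockedModel):
    """An adaptive algorithm: each labeled field example nudges the
    weights (one step of online gradient descent), so the device's
    behavior can drift over time -- the property that strains the
    traditional change-control framework."""

    def __init__(self, weights, bias, learning_rate=0.01):
        super().__init__(weights, bias)
        self.learning_rate = learning_rate

    def update(self, features, label):
        error = self.predict(features) - label
        self.weights = [w - self.learning_rate * error * x
                        for w, x in zip(self.weights, features)]
        self.bias -= self.learning_rate * error

locked = LockedModel([0.8, -0.3], bias=0.1)
adaptive = AdaptiveModel([0.8, -0.3], bias=0.1)
adaptive.update([1.0, 2.0], label=1)  # the adaptive model's behavior shifts
assert locked.predict([1.0, 2.0]) != adaptive.predict([1.0, 2.0])
```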
Greg, let's start with you. Can you speak to what FDA has done to address AI and machine learning already?
Greg Levine: Thanks, Kellie. Well, as you noted, FDA has recognized that AI and machine learning technologies pose a number of challenges from a regulatory perspective. A key challenge is that when FDA regulates software as a medical device, there's a general question about how to determine when changes to a software algorithm are so significant that they merit reevaluation of the software product's safety and effectiveness. And that problem is compounded when you have the "continuous learning" capabilities of many AI or machine learning tools, where, for example, a diagnostic algorithm can continue to learn and even undergo updates and improvements as the software reviews more and more data. The data might be from diagnostic images or other data sets, and FDA's current regulatory framework is not built to address what frankly seems like science fiction: medical devices that think for themselves, that train themselves to establish the initial algorithm, and then may even adapt over time as they learn more and more. At some point, addressing this challenge with machine learning technologies may require legislative action to give FDA more flexibility on when a new clearance or approval would be needed, because this kind of technology obviously was not anticipated when the statute established the parameters for FDA regulation of medical devices, or as FDA developed those policies over time.
But FDA is already working to develop policies to encourage AI and ML technologies and to clarify its regulatory approach within its current legal authorities, and we'll talk about that in a bit. In the meantime, FDA has already authorized a number of AI products as medical devices in recent years, even without a specialized framework for review. One recent publication counted 64 AI- or ML-based devices that had been authorized by FDA, mostly through the 510(k) pathway, a smaller number through De Novo, and one through a PMA. These devices to date have used "locked" algorithms: they don't continuously change or adapt as they learn in the real world, though they may continue to collect data that can be used for future updates to those products.
To give just a few recent examples, in November, Hologic received 510(k) clearance for a deep-learning AI product called the Genius AI Detection device, which aids in the identification and early detection of breast cancer by flagging areas with subtle potential cancers that can be difficult to detect, so that a radiologist can further examine them. The algorithm was trained on a database of Hologic tomography images, and to support the clearance, Hologic performed two types of performance tests: a reader study to verify the performance of radiologists in interpreting the image sets when using the Genius AI algorithm, and a standalone study assessing the algorithm's performance on a retrospective set of high-resolution and standard-resolution images.
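[Editor's note: as a purely illustrative aside, the snippet below shows the kind of algorithm-only metric a "standalone" study reports, sensitivity and specificity on a retrospective case set. The scores, labels, and operating threshold are fabricated and reflect nothing about Hologic's actual study design or results.]

```python
# Illustrative sketch of a standalone performance evaluation:
# algorithm-only sensitivity/specificity on a retrospective set.
# All data and the threshold are made up for illustration.

cases = [
    # (algorithm score, ground-truth label: 1 = cancer confirmed)
    (0.91, 1), (0.62, 1), (0.35, 0), (0.88, 1),
    (0.15, 0), (0.55, 0), (0.77, 1), (0.22, 0),
]
THRESHOLD = 0.5  # hypothetical operating point

tp = sum(1 for s, y in cases if s >= THRESHOLD and y == 1)
fn = sum(1 for s, y in cases if s < THRESHOLD and y == 1)
tn = sum(1 for s, y in cases if s < THRESHOLD and y == 0)
fp = sum(1 for s, y in cases if s >= THRESHOLD and y == 0)

print(f"sensitivity = {tp / (tp + fn):.2f}")  # 1.00 on this toy data
print(f"specificity = {tn / (tn + fp):.2f}")  # 0.75 on this toy data
```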
Another company, Nines, recently received 510(k) clearance for an algorithm called NinesAI that helps radiologists triage CT scans for intracranial hemorrhage and mass effect. Mass effect is associated with conditions such as stroke or brain tumor. That algorithm was trained using a database of radiological images, and its performance was verified and validated in two retrospective performance studies.
And the last example I'll give for the moment: in February of last year, FDA authorized a machine learning algorithm via the De Novo pathway. This was Caption Health Inc.'s AI/ML-based cardiac ultrasound software, and in that case there was an interesting feature: a Predetermined Change Control Plan. Under that plan, Caption Health provided a protocol to mitigate the risk that algorithm changes would alter the device's technical specifications or negatively affect clinical functionality or performance specifications directly associated with the intended use of the device. But essentially, as long as the requirements of the Predetermined Change Control Plan were met, then as the algorithm learned more, the company could implement changes to it without having to go back to FDA for pre-market review. So what we're seeing at the moment is FDA approaching AI- and ML-based algorithms on a case-by-case basis as they are submitted to the Agency.
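[Editor's note: to make that concrete, here is a rough, purely hypothetical Python sketch of the gating logic a Predetermined Change Control Plan encodes. The criteria names and thresholds are invented for illustration; they are not drawn from Caption Health's actual plan or from any FDA requirement.]

```python
# Hypothetical sketch of Predetermined Change Control Plan gating:
# a retrained algorithm may be deployed without a new pre-market
# submission only if it stays within pre-specified bounds.
# Criteria names and numbers are invented for illustration.

PRE_SPECIFIED_CRITERIA = {
    "min_sensitivity": 0.90,    # hypothetical clinical performance floor
    "min_specificity": 0.85,
}

def may_deploy_without_new_submission(candidate):
    """Apply the pre-agreed protocol to a retrained candidate model."""
    return (
        candidate["sensitivity"] >= PRE_SPECIFIED_CRITERIA["min_sensitivity"]
        and candidate["specificity"] >= PRE_SPECIFIED_CRITERIA["min_specificity"]
        and candidate["intended_use_unchanged"]  # intended use may NOT drift
    )

retrained = {"sensitivity": 0.93, "specificity": 0.88,
             "intended_use_unchanged": True}
if may_deploy_without_new_submission(retrained):
    print("Within the pre-specified envelope: deploy and document the change.")
else:
    print("Outside the envelope: return to FDA for pre-market review.")
```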
Sarah, can you talk about the steps FDA is taking to address AI- and ML-based devices in a more comprehensive way?
Sarah Blankstein: Thanks, Greg. Happy to talk about that. So, as you mentioned, FDA is already working to develop some policies to encourage AI and ML technologies and clarify its regulatory approach. In particular, in 2019, FDA released a discussion paper with a Proposed Framework for its regulation of AI and machine learning software as a medical device (or "SaMD"). The 2019 discussion paper drew on practices from existing pre-market programs in the U.S., as well as risk categorization principles from the International Medical Device Regulators Forum (or "IMDRF"), other risk-focused FDA documents on benefit-risk and software modifications, and the Total Product Life Cycle regulatory approach outlined in FDA's Digital Health Software Precertification Program (or the "Pre-Cert Program"). Importantly, the Proposed Framework was limited in scope to commercial SaMD, as opposed to SaMD or other AI and ML tools used for research purposes, or software embedded in medical device hardware, and it focuses only on modifications to pre-existing SaMD rather than original clearances and approvals of these products. Nevertheless, the Proposed Framework did a lot to clarify FDA's current thinking on regulatory approaches to AI and machine learning technologies. And, in general, it was well-received by industry and other stakeholders, who responded to the discussion paper in large numbers with further suggestions and areas for the Agency to clarify.
Some key aspects of FDA's Proposed Framework that I wanted to highlight are:
- First, the identification of the need for Good Machine Learning Practices (or "GMLP"). And the idea is that manufacturers of AI- and ML-based SaMD would be expected to establish cultures of quality and demonstrate compliance with still-to-be-clarified GMLPs.
- In addition, the Proposed Framework puts forward the idea of marketing authorization submissions including a "Predetermined Change Control Plan," which is what Greg mentioned Caption Health submitted with its De Novo application. The Predetermined Change Control Plan details both the types of anticipated modifications to the software, set out in the "SaMD Pre-Specification" (or "SPS"), and the methodology underlying algorithm changes to ensure that the device remains safe and effective after modification, set out in the "Algorithm Change Protocol" (or "ACP").
- Another aspect of the Proposed Framework is that the framework recognizes that the types of modifications being made to an AI- or ML-based device will dictate whether a new 510(k) is required. Certain types of modifications would require new pre-market submissions, while others merely require documentation in the change history or other appropriate documents.
- A fourth key topic highlighted in the Proposed Framework is the importance of transparency, and specifically, transparency about the functions of, and modifications to, AI- and ML-based devices, to promote user trust in these technologies.
- And finally, in the post-marketing realm, the Proposed Framework also encourages manufacturers to leverage real-world performance data and monitoring to understand how their products are being used in the real world and to respond more proactively to safety and risk issues that may emerge.
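[Editor's note: as a simplified, hypothetical illustration of that last point, a manufacturer might monitor a rolling window of confirmed field outcomes and flag drift below a pre-set floor. The window size, warm-up count, and threshold below are invented.]

```python
# Hypothetical sketch of real-world performance monitoring: track the
# algorithm's agreement with confirmed outcomes in the field and flag
# drift below a pre-set floor so the manufacturer can respond early.

from collections import deque

class PerformanceMonitor:
    def __init__(self, floor=0.90, window=1000):
        self.floor = floor                   # minimum acceptable accuracy
        self.results = deque(maxlen=window)  # rolling window of outcomes

    def record(self, prediction, confirmed_outcome):
        self.results.append(prediction == confirmed_outcome)

    def drifting(self):
        if len(self.results) < 100:  # wait for enough field data
            return False
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.floor

monitor = PerformanceMonitor()
# In production, each confirmed case would feed the monitor:
monitor.record(prediction=1, confirmed_outcome=1)
if monitor.drifting():
    print("Real-world performance below floor: investigate and report.")
```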
So Kellie, can you speak to some of the steps FDA has taken since the release of the 2019 Proposed Framework to better address AI and ML technologies?
Kellie Combs: Sure. Thanks, Sarah. Well, as you mentioned, there was certainly a lot of feedback received from stakeholders on that Proposed Framework issued in 2019: over 130 comments submitted to the public docket, and also lots of commentary included in published articles that FDA has reviewed. Additionally, the Agency has participated in a number of efforts led by standards organizations and other groups to develop GMLP standards, FDA hosted a public workshop on the role of AI in radiological imaging, and it convened a Patient Engagement Advisory Committee meeting to gain insights from patients on the factors that may impact their trust in AI and ML technologies. The Agency has taken all these learnings and, just last month, released an Action Plan for addressing the regulation of AI and machine learning tools. The plan outlines five actions that the Agency believes will advance the effort toward practical oversight of AI- and ML-based SaMD. We obviously can't discuss all of them in depth here, but here are some of the highlights:
- First, FDA plans to update the 2019 Proposed Framework, including through issuance of draft guidance on the Predetermined Change Control Plan, and the Agency is planning to issue this guidance in 2021. The guidance is expected to leverage comments that FDA received on the Proposed Framework, as well as FDA's practical experience reviewing these submissions, such as the Caption Health machine learning algorithm that Greg mentioned.
- Second, FDA will encourage harmonization of GMLP development. As was noted earlier, FDA has already been engaging with organizations around the world working on development of GMLPs, and the Agency is committed to deepening its harmonization efforts with these organizations and continuing to leverage this work to adopt clear and harmonized GMLP standards.
- Third, the Agency plans to promote user transparency and a patient-centered regulatory approach, holding a public workshop on how device labeling can support transparency to users about how algorithms work and their limitations. And ultimately, the hope here is that greater transparency will enhance public trust in these sorts of technologies.
- Fourth, FDA will support regulatory science efforts to develop methodology for the evaluation and improvement of machine learning algorithms. Notable here is an effort to identify and eliminate bias: because algorithms are trained and validated on historical data sets, biases present in our current health care system with respect to race, ethnicity, and socioeconomic status can be carried into AI- and ML-based algorithms (a simplified sketch of the kind of subgroup check involved follows this list).
- And fifth and finally, FDA will work with stakeholders who are piloting a real-world performance process for AI- and ML-based SaMD as part of a Total Product Life Cycle approach to these technologies. The idea here is that real-world data can help manufacturers understand how their products are being used, identify opportunities for improvement, and also respond proactively to safety or usability concerns. Importantly, these efforts around real-world data use will be undertaken in coordination with other ongoing FDA initiatives that are focused on promoting the use of real-world evidence in product development and post-market evaluation, some of which we discussed in previous podcasts.
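[Editor's note: as flagged above, here is a toy Python illustration of the subgroup analysis that underlies that kind of bias evaluation. All records, group names, and the tolerance are fabricated for illustration.]

```python
# Toy illustration of a bias check: compute the same performance metric
# separately for each demographic subgroup and flag large gaps.
# All data below is fabricated.

from collections import defaultdict

records = [
    # (subgroup, algorithm prediction, ground truth) - fabricated
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 0, 1),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, pred, truth in records:
    totals[group] += 1
    hits[group] += int(pred == truth)

accuracy = {g: hits[g] / totals[g] for g in totals}
print(accuracy)  # {'group_a': 1.0, 'group_b': 0.33...}

# A large gap suggests the training data carried historical bias
# into the algorithm and warrants investigation.
gap = max(accuracy.values()) - min(accuracy.values())
if gap > 0.1:  # hypothetical tolerance
    print(f"Subgroup performance gap of {gap:.2f}: potential bias.")
```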
Now, while the FDA Action Plan, in conjunction with the 2019 Proposed Framework, is helpful in moving this discussion forward, these are still very preliminary efforts, which means there's a lot of work that remains to be done. Greg, what are some of the important questions that remain unanswered?
Greg Levine: Thanks, Kellie. Well, as you and Sarah described, the Proposed Framework and the Action Plan are preliminary-they're proposals and they're plans-and there's still a lot of work to be done for FDA to turn these concepts and frameworks into specific policies and actions, and actually implement them. But beyond that, there are a number of issues that are not addressed by the Proposed Framework or the Action Plan, which FDA and stakeholders also need to grapple with. For example, as Sarah mentioned, the Proposed Framework is limited to outlining the potential regulation of devices for commercial distribution-so it's the pre-market review, and it does not discuss the use of AI- or ML-based devices in research, including in drug discovery, which is becoming increasingly common. As FDA has done with other digital health technologies, such as remote patient monitoring, the Agency may ultimately have to address the unique considerations that arise in the research context when relying on AI- or ML-based technology. And in addition, there may be some interesting questions to deal with, with respect to intended use of these products. The 2019 Proposed Framework identified three broad types of modifications to AI- or ML-based SaMD, each of which may raise different regulatory issues, so these were:
- Number one, modifications relating to the performance of the product, such as increased sensitivity of a diagnostic test based on further algorithm training;
- Secondly, modifications to inputs, such as compatibility with scanners from additional manufacturers, or input devices from additional manufacturers; and
- Thirdly, modifications to intended use. As an example, that might be changing from providing an informative number, like a "confidence score," to providing some kind of definitive diagnosis.
And neither the Proposed Framework nor the Action Plan addresses head-on the complexity of these issues, in particular the issues surrounding the concept of "intended use." In its recently proposed rule redefining "intended use," FDA clarifies that a product's "characteristics and design," including its technical features, may be used as evidence of intended use, and it will be interesting to see how that might apply in the AI/ML context. FDA and industry are going to need to address complex questions about how changes to a product's algorithm that result from the continuous learning functions of an AI- or ML-based software medical device could inherently affect the device's intended uses on an ongoing basis. That's something that hasn't really been explored yet, but we can see it coming on the horizon.
So certainly there's a lot more work to be done, but in the meantime, from the perspective of industry, which is continuing to develop these products at a rapid pace, there will need to be case-by-case discussions with the Agency in determining how these products will get to market as a regulatory matter, and then how to handle modifications to those products over time.
Kellie Combs: Great. Thanks, Greg. This is certainly an area to watch, and we here at Ropes & Gray will continue to monitor these developments. Unfortunately, I think we're out of time for today. Thanks very much everyone for tuning in to our podcast, Non-binding Guidance, which is brought to you by our attorneys in the life sciences regulatory and compliance practice at Ropes & Gray. For more information about our practice or other topics of interest to life sciences companies, please visit our FDA regulatory and life sciences practice pages at www.ropesgray.com. You can also subscribe to Non-binding Guidance and other RopesTalk podcasts at Ropes & Gray's podcast newsroom on our website, or by searching for Ropes & Gray podcasts in iTunes, Google Play or Spotify. Thanks again for listening.