Beyond traditional regulatory concerns, the U.S. Food and Drug Administration (FDA) is navigating the challenges of usability, equity, performance bias, continuous learning, and accountability in response to an ever-increasing number of AI/ML devices. The action plan released by the FDA's Center for Devices and Radiological Health (CDRH) in January 2021 underscores the agency's commitment to responsible AI/ML device integration, and its emphasis on transparency, patient-centered approaches, and collaborative regulatory science initiatives reflects this continuously evolving landscape.
The FDA is reviewing an increasing number of applications for AI/ML devices, with the number receiving FDA marketing authorization nearing seven hundred as of October 2023. AI/ML devices raise unique considerations during their development and use, including usability, equity of access, management of performance bias, the potential for continuous learning, and the accountability of stakeholders (manufacturers, patients, caregivers, healthcare providers, etc.). These considerations affect not only the responsible development and use of AI/ML devices but also their regulation. CDRH recognizes these unique considerations and released an action plan for AI/ML devices in January 2021. Among its numerous aims, the action plan highlights CDRH's commitment to promoting transparency of AI/ML devices by fostering a patient-centered approach, while also collaborating with stakeholders on regulatory science efforts.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.