Key Takeaways:
- On Nov. 6, the Food and Drug Administration's (FDA) Digital Health Advisory Committee (DHAC) convened to discuss the promise and evolving challenges of Generative Artificial Intelligence (GenAI)-Enabled Digital Mental Health Medical Devices.
- The FDA acknowledged the challenges of applying traditional premarket and postmarket processes for static medical devices to rapidly evolving AI-enabled medical devices and sought guidance on how to balance innovation and access with real-world safety and performance.
- Of the 1,200+ AI-enabled medical devices authorized by the FDA, none are currently authorized for mental health conditions. There are few authorized digital mental health devices on the market, and most are non-AI and prescription-only adjuncts to clinical care.
- Although the FDA exercises enforcement discretion for low-risk software functions and software as a medical device (SaMD), innovators in the mental health and wellness space should remain vigilant. Experts called for a distinction between over-the-counter/direct-to-consumer devices and prescription devices with clinical oversight, as well as a clearer framework to distinguish medical devices from general wellness products (GWPs).
Last week, the FDA's DHAC convened to address the promise and risks of leveraging AI to alleviate the growing mental health crisis. Building on its first meeting (see here), this second meeting focused on four key issues: (1) clinical evidence considerations, (2) products designed to incorporate diagnostics with therapeutics, (3) generative AI-enabled technologies and patient safety in psychiatry, and (4) approaches to AI-delivered therapy.
The FDA recognized AI's potential to expand access to high-quality care, particularly in underserved and rural areas, but the agency is seeking to better understand the benefits, risks and mitigation strategies to ensure that digital mental health (DMH) diagnostics and therapeutics remain safe and effective. A key regulatory concern is the unique risks associated with "AI therapists" and provider-like chatbots. The FDA emphasized the need for tailored guidance distinguishing between provider-supervised (human-in-the-loop) systems and autonomous devices that diagnose or treat patients without clinician oversight. Consumers may not be able to differentiate between medical devices that require FDA authorization and wellness apps that fall outside the FDA's purview.
Digital Mental Health Medical Devices
The FDA defines "digital mental health medical devices" as "digital products or functions (including those utilizing AI methods) that are intended to diagnose, cure, mitigate, treat or prevent a psychiatric condition, including uses that increase a patient's access to mental health professionals." This includes SaMD. The FDA provided several examples with varying degrees of risk on the spectrum and explained that it would not enforce device requirements under the Federal Food, Drug, and Cosmetic Act (FDCA) for software functions or apps that pose a low safety risk. For example, an app designed to help patients with anxiety maintain behavioral coping skills by providing a "Skill of the Day" exercise when the user feels anxious is low risk and not the focus of regulatory oversight.
Digital Mental Health Diagnostics
Digital mental health medical devices can be diagnostic, therapeutic or both. The FDA defines digital mental health diagnostics as "any digital mental health medical device that is intended to contribute to the assessment, evaluation, monitoring, or diagnosis of a patient, and is not limited to stand-alone diagnostic tests." Diagnostic devices collect or analyze physiological or behavioral data through various instruments. In the context of AI-enabled mental health medical devices, the relevant data is typically behavioral, gathered through conversational prompts and interactions with generative AI and large language models (LLMs) or through clinical assessments performed by medical professionals. Only a few authorized DMH diagnostics exist: prescription devices that assist clinicians in diagnosing pediatric autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD).
Potential benefits the FDA will consider in its review include improved patient access to care, earlier detection and more timely assessment of symptoms, and expanded datasets that provide clinicians with more specialized diagnostic support. The FDA remains concerned about inaccurate information, user misinterpretation and overreliance, which can lead to failures such as misdiagnosis, ineffective treatment and delayed crisis identification. Developers should ensure that the device design aligns with the intended use and that real-world performance testing demonstrates the device functions as intended.
Digital Mental Health Therapeutics
The FDA defines DMH therapeutics as "any digital mental health medical device that is intended to contribute to or aid in the treatment of a psychiatric condition" and states that these "often include stand-alone and adjunctive therapy tools that are intended to provide therapeutic content in the course of treatment for psychiatric disorders." The FDA cited a handful of DMH therapeutics authorized under 21 CFR §§ 882.5801 and 882.5803, explaining that the authorized devices are not AI-enabled and most are adjuncts to clinical care. Premarket considerations include determining whether the device is intended to be adjunctive or stand-alone, developing well-designed randomized controlled trials, establishing a time frame for demonstrating clinical benefit, articulating meaningful endpoints, and providing clear labeling that includes precautions, contraindications and an overview of performance. The FDA explained that its newly finalized guidance on predetermined change control plans (PCCPs) is critical for premarket submissions of AI-enabled devices and encouraged a total product life cycle (TPLC) approach to building a robust risk mitigation framework, beginning with product design and development.
Hypothetical Large Language Model Therapy for Major Depressive Disorder
The FDA presented the committee with a hypothetical prescription LLM device for stand-alone treatment of major depressive disorder in adults, seeking to better understand risk management throughout the TPLC by posing questions about premarket evidence, meaningful clinical endpoints, postmarket monitoring, labeling, prescription versus over-the-counter status, and use in pediatric populations versus adults. Experts highlighted the promise of expanded access and scalability but noted risks such as missed symptoms, privacy breaches, bias, and use beyond the device's intended use and validated settings ("scope creep").
The committee recommended that premarket randomized controlled trials use rigorous, well-controlled designs with longitudinal follow-up to capture sustained effects and adverse outcomes. For meaningful clinical endpoints, experts favored not only validated symptom scales (such as the PHQ-9 and GAD-7) but also functional outcomes that measure improvements in quality of life, potentially augmented by objective data from sensors or wearables. The committee also called for clear escalation protocols, ongoing postmarket surveillance to monitor changes in AI models, real-time adverse event reporting, strong labeling and special safeguards for pediatric patients. It recommended a flexible, risk-based regulatory approach to ensure safety, effectiveness and equity as these technologies are adopted.
A common thread in the discussions was that the line between consumer wellness apps and medical devices is increasingly blurry, and a new regulatory framework may help guide developers and inform consumers. While risks abound in both DMH diagnostics and therapeutics, the advisory committee urged regulation of direct-to-consumer and over-the-counter devices but seemed slightly more optimistic about autonomous devices designed to provide an actionable diagnosis or diagnostic support than about those designed to provide stand-alone treatment.
Comment Opportunity
The FDA is accepting comments until Dec. 8 at 11:59 p.m. EST.
Docket: FDA-2025-N-4203
Topics for comment include:
- Real-world evaluation methods and infrastructure
- Performance metrics and indicators
- Postmarket data sources and quality management
- Monitoring triggers and response protocols
- Human-AI interaction and user experience
- Additional considerations and best practices
Co-Authored by Tatyana Norman-Webler
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.