12 November 2025

How CGL Policies May Respond To Novel AI Psychosis Claims

Kenneth E. Ryan, Wiley Rein

This article was originally published by Law360.

The rapid advancement of generative artificial intelligence has brought conversational AI into the daily lives of hundreds of millions of users.

While many interact through mainstream platforms such as OpenAI's ChatGPT, Google's Gemini or X's Grok, a growing number engage with AI through third-party applications — often referred to as AI wrappers — that build user-facing experiences atop large language models.

Notably, several of these AI wrappers have already been deployed in sensitive domains such as mental health. These emerging tools and established chat interfaces have led to what some psychologists are calling "AI psychosis."1

The term "AI psychosis" refers to the phenomenon where a user experiences a mental or emotional break from reality, such as paranoia or delusions, allegedly due to prolonged and intimate interaction with an AI model.2 Though still an emerging theory, three main types of AI psychosis have been described:

  • Messianic missions: This subtype involves individuals who believe they have uncovered profound truths about the world or have been chosen for a special mission. These beliefs often stem from conversations with AI that mirror or validate the user's thoughts, reinforcing a sense of exceptionalism or divine purpose.3
  • God-like AI: In this subtype, users attribute divine qualities, omniscience or sentience to AI systems, believing them to be deities or higher beings.4
  • Romantic or attachment-based delusions: This subtype involves users developing romantic or emotional attachments to AI chatbots, believing the AI reciprocates their feelings.5

Cases involving reported AI psychosis occurrences will undoubtedly generate insurance claims and coverage disputes across the full spectrum of insurance programs. This article briefly touches upon key issues likely to arise in the context of commercial general liability policies.

AI Psychosis Cases

AI psychosis allegations have been at the center of several lawsuits involving physical and mental injuries.

This story mentions suicide. If you are experiencing thoughts of suicide, the Suicide and Crisis Lifeline is available 24 hours a day at 988 or online at 988lifeline.org.

One of the most significant cases, Megan Garcia v. Character Technologies Inc., involves a Florida mother who filed a lawsuit last year against Character.AI and Google, alleging that a chatbot contributed to her 14-year-old son's suicide.6 According to the complaint, the boy developed a pathological relationship with a chatbot modeled after a "Game of Thrones" character, which allegedly engaged him in emotionally manipulative and sexually explicit conversations.

The lawsuit claims that this interaction led to severe psychological deterioration and ultimately his death. In May, the U.S. District Court for the Middle District of Florida ruled that the case could proceed, rejecting the defendants' motion to dismiss and signaling that AI developers could be held accountable for the mental health consequences of their platforms.7

In another example, a middle-aged tech executive in Connecticut experiencing paranoia claims to have turned to an AI chat platform to share and explore his concerns about a surveillance campaign he felt was being carried out against him by numerous parties.

He allegedly engaged the platform's memory feature, which allowed the program to retain information from prior conversations and become more engaged in his theories. This purportedly caused the program to provide affirmative responses to the man's paranoia, and the man ultimately murdered his mother and then committed suicide.8

In other instances, AI chatbots allegedly have coached underage users on how to hide evidence of self-harm, purportedly persuaded a woman with severe mental illness to stop taking her medication, and potentially led users to believe they are the "chosen one" or that they are living in a simulated false reality.9 In one instance, a chatbot allegedly convinced a user that he was a real-life superhero, resulting in a complete break from reality and requiring medical intervention.10

Dr. Keith Sakata, a California psychiatrist, has reported at least 12 similar cases requiring clinical treatment, as relayed in an August Business Insider article.11

These examples underscore the potential for AI psychosis to be associated with serious physical or psychological consequences, including involuntary commitment, hospitalization, arrest, imprisonment, suicide, social isolation or lost productivity.

Such outcomes may give rise to claims for damages, including costs for wrongful death, medical treatment, lost wages, mental anguish and other emotional injuries. David Sacks, the Trump administration's AI and crypto czar, speculated on an August episode of the "All In" podcast that plaintiffs' attorneys will bring lawsuits based on purported AI psychosis injuries.12

Coverage Implications Under Commercial General Liability Policies

As AI chat interfaces become more prevalent — especially among vulnerable populations — policyholders offering these technologies may face increasing exposure to claims alleging psychological harm, wrongful death or negligent design. The potential liabilities span a range of coverages, including general liability, professional liability, and errors and omissions.

Insurers also may see increased demand for bespoke exclusions or endorsements addressing AI-induced mental health risks. As courts begin to test the boundaries of liability in this space, underwriters and claims professionals should closely monitor emerging litigation and regulatory developments to assess how cases involving AI psychosis may shape future risk profiles and coverage disputes.

Occurrence

Although the term "AI psychosis" has yet to be comprehensively defined, it appears that the phenomenon develops after prolonged and continuous exposure to AI, including to chatbots. It also appears that those most vulnerable may have preexisting mental health complications that exposure to AI may exacerbate.13

Whether a general liability policy is written on a claims-made or occurrence basis, a prerequisite to coverage is a triggering event that falls within the policy's insuring terms.14 Typically, these are styled as "occurrences," which are generally defined as "an accident, including a continuous or repeated exposure to conditions, which results in bodily injury or property damage neither expected nor intended from the standpoint of the insured."15

For most AI service providers, these occurrences are accidents that are neither expected nor intended. The most well-known AI chatbots, ChatGPT, Grok and Gemini, are understood to be programmed and trained with extensive guardrails and may refuse to engage in behavior that has been reportedly linked to alleged AI psychosis occurrences. They are also subject to constant updates.

On the other hand, third-party AI wrappers may be developed for specific contexts, such as for use by children, for use by those with developmental challenges, for the elderly, for those in medical treatment, including in the mental health or addiction recovery contexts, and other vulnerable groups.

In such contexts, AI interfaces that prioritize maximizing user engagement may present particular risks. The AI's design, deployment, safeguards and guardrails may bear on whether "AI psychosis" constitutes an expected or intended injury.

This is particularly true where the applicable law requires an objective standard for assessing whether injury is expected or intended.16 Questions of fact regarding the training and development of the specific AI product at issue could bear on a determination. Investigating this question may be difficult because AI service providers will likely view training and development information as confidential and proprietary business information.

Courts in jurisdictions that evaluate an insured's expectations or intent on a more subjective basis may require an even higher level of proof.

Bodily Injury

To trigger coverage under a general liability policy, the occurrence must result in bodily injury or property damage as defined by the policy. For purposes of this AI exposure analysis, we will focus on bodily injury. Most standard policies define bodily injury as: "bodily injury, sickness or disease sustained by any person which occurs during the policy period, including death at any time resulting therefrom."17

Many courts have addressed the scope of what bodily injury encompasses, including whether emotional harm or biological harm constitutes bodily injury.18

There is reason to anticipate debate over whether an AI psychosis occurrence falls within the ambit of "bodily injury," though likely less so in instances of murder, suicide, assault or self-mutilation.

While courts have construed this term to encompass mental or emotional distress — particularly when such distress is accompanied by physical symptoms or necessitates medical intervention — the emergence of alleged AI-related psychological conditions challenges conventional boundaries.

For instance, in its 2011 decision in Abouzaid v. Mansard Gardens Associates LLC, the New Jersey Supreme Court recognized emotional distress as qualifying bodily injury under a CGL policy, even though there was no allegation of physical injury.19

This precedent suggests that if an AI psychosis occurrence results in diagnosable psychiatric conditions, such as anxiety disorders or depression, it may fall within the scope of bodily injury. This is especially true if the affected individual experiences physical symptoms (e.g., insomnia, weight loss or panic attacks) or receives treatment from licensed professionals.

However, as has been the case with other mental health conditions, coverage may be contested if the injury is deemed purely psychological without physical consequences.20 Some courts have drawn a distinction between emotional harm and bodily injury, requiring evidence of physical impact or medical diagnosis.21

As the medical community continues to study AI-related mental health effects, insurers and courts may need to revisit traditional definitions of bodily injury to account for emerging forms of harm.

Professional Services Exclusion

As previously stated, AI products can be deployed in specific contexts, including in professional contexts. AI has been used in the medical context to assist in information collection, diagnostics and imaging.22 And doctors may use AI chatbots to assist in treating patients.23 For conversation-based treatments, such as talk therapy or psychotherapy, AI tools are already in use.24

Reports of AI psychosis occurrences in these contexts may implicate professional services exclusions in commercial general liability policies. These exclusions typically bar coverage for "bodily injury" or "property damage" arising from the rendering or failure to render professional services, such as medical, legal or financial advice, due to the specialized expertise involved.

Whether coverage for damages arising from an AI psychosis occurrence is barred under professional services exclusions will depend on policy language and the facts giving rise to the psychotic episode. On the other hand, the professional nature of the underlying AI interaction could implicate coverage under insurance policies specifically tailored to professional services.

Typically, "professional services" is defined broadly for the purposes of this exclusion.25 But specific policy language will govern, particularly concerning whether professional services must be performed by a person.

Case law, such as the 2018 decision in Beazley Insurance Co. Inc. v. Ace American Insurance Co. from the U.S. Court of Appeals for the Second Circuit, suggests that "professional services" can encompass mechanical, nonhuman failures that occur in the course of providing professional services.26 This suggests that claims arising from an AI psychosis occurrence may be found to implicate professional services exclusions.

Damages

The potential damages in cases associated with alleged AI psychosis present novel challenges for insurers, particularly under CGL policies. Insureds may face claims for a range of harms, including medical expenses, lost wages, mental anguish and emotional distress, punitive damages, and wrongful death.

Medical Expenses

A primary category of damages is medical expenses. In this emerging area, insureds may face costs related to hospitalization, involuntary commitment or long-term psychiatric treatment. These expenses can be significant and give rise to coverage disputes concerning whether such harms fall within the scope of "bodily injury" as defined by liability policies.

As noted in the bodily injury discussion above, many courts have historically distinguished between physical and purely psychological injuries, with some declining to treat mental harm as bodily injury absent physical manifestation.27 The emergence of AI psychosis and other purported psychological issues — manifesting in delusions, paranoia or romantic attachment to AI systems — further complicates this analysis because such symptoms may not always present with outward physical effects.

If an insured is able to establish that the definition of "bodily injury" is satisfied, recoverable medical expenses may include psychiatric evaluation, inpatient hospitalization, pharmacological treatment and long-term therapy. This analysis will depend heavily on the specific policy language and jurisdiction-specific precedent.

Lost Wages

In a similar vein, cases associated with alleged AI psychosis may affect a person's ability to work, either temporarily or permanently. Claims may include lost income, reduced productivity and future earning potential, especially where the psychotic episode results in job loss or career disruption.28

Mental Anguish and Emotional Distress

Courts have long recognized mental suffering as a compensable injury, particularly when accompanied by physical symptoms or medical treatment.29 Plaintiffs may seek damages for anxiety, depression, paranoia and other psychological sequelae resulting from AI interactions.

Punitive Damages

Punitive damages also may be sought where plaintiffs allege that AI developers acted with reckless disregard for user safety — particularly in cases involving vulnerable populations.30

Wrongful Death

Wrongful death claims, such as in the Florida lawsuit discussed above, further expand the scope of potential liability.31 These claims may implicate not only bodily injury coverage but also exclusions for professional services, depending on how the AI was deployed.

AI Exclusions and Conclusion

As insurers continue to assess AI-related exposures, several have begun deploying AI exclusions, which generally serve to bar coverage for AI-related liability, particularly within the professional liability context.32 The extent to which these exclusions are entering the general liability space remains unclear. Further, the enforceability of these exclusions remains untested.

As courts and regulators begin to confront the realities of alleged AI-induced mental health and related physical injuries, insurers will need to reevaluate policy language, underwriting practices and claims handling protocols to address this emerging risk landscape.

Footnotes

1 Kevin Caridad, When the Chatbot Becomes the Crisis: Understanding AI-Induced Psychosis, Cognitive Behavior Institute (Aug. 7, 2025), https://www.papsychotherapy.org/blog/when-the-chatbot-becomes-the-crisis-understanding-ai-induced-psychosis .

2 Marlynn Wei, The Emerging Problem of "AI Psychosis," Psychology Today (Sept. 4, 2025), https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis .

3 Marlynn Wei, The Emerging Problem of "AI Psychosis," Psychology Today (Sept. 4, 2025), https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis .

4 Marlynn Wei, The Emerging Problem of "AI Psychosis," Psychology Today (Sept. 4, 2025), https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis ; Richard Armitage, "AI psychosis," BJGP Life (Aug. 20, 2025), https://bjgplife.com/ai-psychosis/.

5 Susan Trachman, The Dangers of AI-Generated Romance, Psychology Today (Aug. 18, 2024), https://www.psychologytoday.com/us/blog/its-not-just-in-your-head/202408/the-dangers-of-ai-generated-romance ; Kennedy Unthank, The Rise (and Danger) of the AI Relationship, Plugged In (Nov. 6, 2024), https://www.pluggedin.com/blog/the-rise-and-danger-of-the-ai-relationship/ .

6 Garcia v. Character Tech. Inc., Case No. 6:24-cv-1903-ACC-UAM (M.D. Fla. July 15, 2025).

7 Allen Frances & Luciana Ramos, Preliminary Report on Chatbot Iatrogenic Dangers, Psychiatric Times (Aug. 15, 2025), https://www.psychiatrictimes.com/view/preliminary-report-on-chatbot-iatrogenic-dangers ; John Colascione, From AI Psychosis to Wrongful Death Lawsuits: How ChatGPT and Chatbots Are Fueling Urgent Calls for Regulation, LongIslandGuide.com (Aug. 20, 2025), https://www.longislandguide.com/2025/08/20/from-ai-psychosis-to-wrongful-death-lawsuits-how-chatgpt-and-chatbots-are-fueling-urgent-calls-for-regulation/ ; Blake Brittain, Google, AI firm must face lawsuit filed by a mother over suicide of son, US court says, Reuters (May 21, 2025), https://www.reuters.com/sustainability/boards-policy-regulation/google-ai-firm-must-face-lawsuit-filed-by-mother-over-suicide-son-us-court-says-2025-05-21 .

8 Julie Jargon & Sam Kessler, A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich, Wall Street Journal (Aug. 28, 2025), https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb .

9 Allen Frances & Luciana Ramos, Preliminary Report on Chatbot Iatrogenic Dangers, Psychiatric Times (Aug. 15, 2025), https://www.psychiatrictimes.com/view/preliminary-report-on-chatbot-iatrogenic-dangers .

10 Kashmir Hill & Dylan Freedman, Chatbots Can Go Into a Delusional Spiral. Here's How It Happens., New York Times (Aug. 8, 2025), https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html .

11 Kasmira Ganer, I'm a psychiatrist who has treated 12 patients with 'AI Psychosis' this year. Watch out for these red flags., Business Insider (Aug. 15, 2025), https://www.businessinsider.com/chatgpt-ai-psychosis-induced-explained-examples-by-psychiatrist-patients-2025-8 .

12 All-In with Chamath, Jason, Sacks & Friedberg, Episode 239 "AI Psychosis, America's Broken Social Fabric, Trump Takes Over DC Police, Is VC Broken?" at 14:55 – 17:57 (Aug. 15, 2025).

13 Id.

14 9 Jordan Plitt, et al., Couch on Insurance § 126:25 (3d ed. 1995).

15 9 Jordan Plitt, et al., Couch on Insurance § 126:29.

16 City of Carter Lake v. Aetna Cas. & Sur. Co., 604 F.2d 1052, 1059 (8th Cir. 1979) ("If the insured knew or should have known that there was a substantial probability that certain results would follow his acts or omissions then there has not been an occurrence or accident as defined in this type of policy when such results actually come to pass.") (applying Nebraska law).

17 Garrison v. Bickford, 377 S.W.3d 659 (Tenn. 2012); 8 Colo. Prac., Personal Injury Torts and Insurance § 53:8 (3d ed.); Dial I for Insurance: Why the Mobile Phone Industry Should Call on Its Insurers to Cover Liabilities Arising from Radio Frequency Energy, 35 Tort & Ins. L.J. 795 (2000).

18 Taylor v. Mucci, 952 A.2d 776 (Conn. 2008); Am. Indem. Co. v. Foy Trailer Rentals, Inc., W2000–00397–COA–R3–CV, 2000 WL 1839131, at *3–4 (Tenn. Ct. App. Nov. 28, 2000).

19 Abouzaid v. Mansard Gardens Assocs., LLC, 23 A.3d 338 (N.J. 2011).

20 State Farm Fire & Cas. Co. v. Westchester Inv. Co., 721 F. Supp. 1165 (C.D. Cal. 1989).

21 Id.

22 Google for Health, Imaging and Diagnostics, https://health.google/imaging-and-diagnostics/ (last visited Oct. 8, 2025).

23 Tanya Albert Henry, 2 in 3 physicians are using health AI—up 78% from 2023, American Medical Association (Feb. 26, 2025), https://www.ama-assn.org/practice-management/digital-health/2-3-physicians-are-using-health-ai-78-2023 .

24 E.g., WYSA, https://www.wysa.com/ (last visited Oct. 8, 2025); TheraBot, https://www.trytherabot.com/ (last visited Oct. 8, 2025).

25 14 Jordan Plitt, et al., Couch on Insurance § 201:74.

26 See Beazley Ins. Co., Inc. v. ACE Am. Ins. Co., 880 F.3d 64 (2d Cir. 2018) (professional services exclusion in a D&O policy issued to a stock exchange applied to bar coverage for a lawsuit arising from a technological error that caused improper trade execution).

27 See Allstate Ins. Co. v. Wagner-Ellsworth, 188 P.3d 1042, 1049 (Mont. 2008) (observing that, "in tort actions alleging mental suffering, the Montana Supreme Court has distinguished mental and emotional harm from physical harm") (quoting Aetna Cas. and Sur. Co. v. First Sec. Bank of Bozeman, 662 F. Supp. 1126 (D. Mont. 1987)); Allstate Prop. & Cas. Ins. Co. v. Winslow, 66 F. Supp. 3d 661 (W.D. Pa. 2014) (noting that, in Pennsylvania, if a victim claims to have suffered only psychological trauma, distress, embarrassment, and humiliation, insurance coverage is not triggered regardless of how the term "bodily injury" is defined in the policy) (quoting Phila. Contributionship Ins. Co. v. Shapiro, 798 A.2d 781, 787 (Pa. Super. Ct. 2002)).

28 Robert Hart, AI Psychosis Is Rarely Psychosis at All, WIRED (Sept. 18, 2025), https://www.wired.com/story/ai-psychosis-is-rarely-psychosis-at-all/.

29 Abouzaid v. Mansard Gardens Assocs., LLC, 23 A.3d 338 (N.J. 2011).

30 Richard Porter & Sarah Champion, A Review of the U.S. Punitive Damages Liability Landscape, Chubb (Mar. 2022), https://www.chubb.com/content/dam/chubb-sites/chubb-com/microsites/global/global/documents/pdf/ChubbBermuda_PuniDamagesWhitePaper_061322_Digital_ilHEz4.pdf .

31 Julie Jargon & Sam Kessler, A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich, The Wall Street Journal (Aug. 28, 2025), https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb .

32 Geoffrey B. Fehling et al., The Continued Proliferation of AI Exclusions, Hunton Insurance Recovery Blog (May 28, 2025), https://www.hunton.com/hunton-insurance-recovery-blog/the-continued-proliferation-of-ai-exclusions ; Michael C. Maschke et al., AI exclusions are creeping into insurance: But cyber policies aren't the issue (yet), Iowa Bar Blog (Sept. 17, 2025), https://www.iowabar.org/?pg=IowaBarBlog&blAction=showEntry&blogEntry=131301 .


