How CGL Policies May Respond To Novel AI Psychosis Claims
This article was originally published by Law360.
The rapid advancement of generative artificial intelligence has brought conversational AI into the daily lives of hundreds of millions of users.
While many interact through mainstream platforms such as OpenAI's ChatGPT, Google's Gemini or X's Grok, a growing number engage with AI through third-party applications — often referred to as AI wrappers — that build user-facing experiences atop large language models.
Notably, several of these AI wrappers have already been deployed in sensitive domains such as mental health. These emerging tools and established chat interfaces have led to what some psychologists are calling "AI psychosis."[1]
The term "AI psychosis" refers to the phenomenon where a user experiences a mental or emotional break from reality, such as paranoia or delusions, allegedly due to prolonged and intimate interaction with an AI model.[2] Though still an emerging theory, three main types of AI psychosis have been described:
- Messianic missions: This subtype involves individuals who believe they have uncovered profound truths about the world or have been chosen for a special mission. These beliefs often stem from conversations with AI that mirror or validate the user's thoughts, reinforcing a sense of exceptionalism or divine purpose.[3]
- God-like AI: In this subtype, users attribute divine qualities, omniscience or sentience to AI systems, believing them to be deities or higher beings.[4]
- Romantic or attachment-based delusions: This subtype involves users developing romantic or emotional attachments to AI chatbots, believing the AI reciprocates their feelings.[5]
Cases involving reported AI psychosis occurrences will undoubtedly generate insurance claims and coverage disputes across the full spectrum of insurance programs. This article briefly touches upon key issues likely to arise in the context of commercial general liability policies.
AI Psychosis Cases
AI psychosis allegations have been at the center of several lawsuits involving physical and mental injuries.
This story mentions suicide. If you are experiencing thoughts of suicide, the Suicide and Crisis Lifeline is available 24 hours a day at 988 or online at 988lifeline.org.
One of the most significant cases, Megan Garcia v. Character Technologies Inc., involves a Florida mother who filed a lawsuit last year against Character.AI and Google, alleging that a chatbot contributed to her 14-year-old son's suicide.[6] According to the complaint, the boy developed a pathological relationship with a chatbot modeled after a "Game of Thrones" character, which allegedly engaged him in emotionally manipulative and sexually explicit conversations.
The lawsuit claims that this interaction led to severe psychological deterioration and ultimately his death. In May, the U.S. District Court for the Middle District of Florida ruled that the case could proceed, rejecting the defendants' motion to dismiss and signaling that AI developers could be held accountable for the mental health consequences of their platforms.[7]
In another example, a middle-aged tech executive in Connecticut who was experiencing paranoia reportedly turned to an AI chat platform to share and explore his concerns about a surveillance campaign he believed was being carried out against him by numerous parties.
He allegedly enabled the platform's memory feature, which allowed the program to retain information from prior conversations and engage more deeply with his theories. This purportedly caused the program to affirm the man's paranoia, and the man ultimately murdered his mother and then committed suicide.[8]
In other instances, AI chatbots allegedly have coached underage users on how to hide evidence of self-harm, purportedly persuaded a woman with severe mental illness to stop taking her medication, and potentially led users to believe they are the "chosen one" or that they are living in a simulated false reality.[9] In one instance, a chatbot allegedly convinced a user that he was a real-life superhero, resulting in a complete break from reality and requiring medical intervention.[10]
Dr. Keith Sakata, a California psychiatrist, has reported at least 12 similar cases requiring clinical treatment, as relayed in an August Business Insider article.[11]
These examples underscore the potential for AI psychosis to be associated with serious physical or psychological consequences, including involuntary commitment, hospitalization, arrest, imprisonment, suicide, social isolation or lost productivity.
Such outcomes may give rise to claims for damages, including for wrongful death, medical treatment costs, lost wages, mental anguish and other emotional injuries. David Sacks, the Trump administration's AI and crypto czar, speculated on an August episode of the "All In" podcast that plaintiffs' attorneys will bring lawsuits based on purported AI psychosis injuries.[12]
Coverage Implications Under Commercial General Liability Policies
As AI chat interfaces become more prevalent — especially among vulnerable populations — policyholders offering these technologies may face increasing exposure to claims alleging psychological harm, wrongful death or negligent design. The potential liabilities span a range of coverages, including general liability, professional liability, and errors and omissions.
Insurers also may see increased demand for bespoke exclusions or endorsements addressing AI-induced mental health risks. As courts begin to test the boundaries of liability in this space, underwriters and claims professionals should closely monitor emerging litigation and regulatory developments to assess how cases involving AI psychosis may shape future risk profiles and coverage disputes.
Occurrence
Although the term "AI psychosis" has yet to be comprehensively defined, it appears that the phenomenon develops after prolonged and continuous exposure to AI, including to chatbots. It also appears that those most vulnerable may have preexisting mental health complications that exposure to AI may exacerbate.[13]
Whether a general liability policy is written on a claims-made or occurrence basis, a prerequisite to coverage is a triggering event that falls within the policy's insuring terms.[14] Typically, these are styled as "occurrences," which are generally defined as "an accident, including a continuous or repeated exposure to conditions, which results in bodily injury or property damage neither expected nor intended from the standpoint of the insured."[15]
For most AI service providers, alleged AI psychosis events are likely to qualify as accidents that are neither expected nor intended. The best-known AI chatbots, such as ChatGPT, Grok and Gemini, are understood to be programmed and trained with extensive guardrails, may refuse to engage in the kinds of behavior reportedly linked to alleged AI psychosis occurrences, and are subject to constant updates.
On the other hand, third-party AI wrappers may be developed for specific contexts, such as use by children, people with developmental challenges, the elderly, those in medical treatment (including mental health or addiction recovery), and other vulnerable groups.
In such contexts, AI interfaces that prioritize maximizing user engagement may present particular risks. The AI's design, deployment, safeguards and guardrails may bear on whether "AI psychosis" constitutes an expected or intended injury.
This is particularly true where the applicable law requires an objective standard for assessing whether injury is expected or intended.[16] Questions of fact regarding the training and development of the specific AI product at issue could bear on a determination. Investigating this question may be difficult because AI service providers will likely view training and development information as confidential and proprietary business information.
Courts in jurisdictions that evaluate an insured's expectations or intent on a more subjective basis may require an even higher level of proof.
Bodily Injury
To trigger coverage under a general liability policy, the occurrence must result in bodily injury or property damage as defined by the policy. For purposes of this AI exposure analysis, we will focus on bodily injury. Most standard policies define bodily injury as: "bodily injury, sickness or disease sustained by any person which occurs during the policy period, including death at any time resulting therefrom."[17]
Many courts have addressed the scope of what bodily injury encompasses, including whether emotional harm or biological harm constitutes bodily injury.[18]
There is reason to anticipate debate over whether an AI psychosis occurrence would fall within the ambit of "bodily injury," although less so, in all likelihood, in instances of murder, suicide, assault or self-mutilation.
While courts have construed this term to encompass mental or emotional distress — particularly when such distress is accompanied by physical symptoms or necessitates medical intervention — the emergence of alleged AI-related psychological conditions challenges conventional boundaries.
For instance, in its 2011 decision in Abouzaid v. Mansard Gardens Associates LLC, the New Jersey Supreme Court recognized that claims for emotional distress could qualify as bodily injury under a CGL policy, even in the absence of any alleged physical injury.[19]
This precedent suggests that if an AI psychosis occurrence results in diagnosable psychiatric conditions, such as anxiety disorders or depression, it may fall within the scope of bodily injury. This is especially true if the affected individual experiences physical symptoms (e.g., insomnia, weight loss or panic attacks) or receives treatment from licensed professionals.
However, as has been the case with other mental health conditions, coverage may be contested if the injury is deemed purely psychological without physical consequences.[20] Some courts have drawn a distinction between emotional harm and bodily injury, requiring evidence of physical impact or medical diagnosis.[21]
As the medical community continues to study AI-related mental health effects, insurers and courts may need to revisit traditional definitions of bodily injury to account for emerging forms of harm.
Professional Services Exclusion
As previously stated, AI products can be deployed in specific contexts, including in professional contexts. AI has been used in the medical context to assist in information collection, diagnostics and imaging.[22] And doctors may use AI chatbots to assist in treating patients.[23] For conversation-based treatments, such as talk therapy or psychotherapy, AI-based tools are already in use.[24]
Reports of AI psychosis occurrences in these contexts may implicate professional services exclusions in commercial general liability policies. These exclusions typically bar coverage for "bodily injury" or "property damage" arising from the rendering or failure to render professional services, such as medical, legal or financial advice, due to the specialized expertise involved.
Whether coverage for damages arising from an AI psychosis occurrence is barred under professional services exclusions will depend on policy language and the facts giving rise to the psychotic episode. On the other hand, the professional nature of the underlying AI interaction could implicate coverage under insurance policies specifically tailored to professional services.
Typically, "professional services" is defined broadly for the purposes of this exclusion.[25] But specific policy language will govern, particularly concerning whether professional services must be performed by a person.
Case law, such as the U.S. Court of Appeals for the Second Circuit's 2018 decision in Beazley Insurance Co. Inc. v. Ace American Insurance Co., suggests that "professional services" can encompass mechanical, nonhuman failures that occur in the course of providing professional services.[26] This suggests that claims arising from AI psychosis may be found to implicate professional services exclusions.
Damages
The potential damages in cases associated with alleged AI psychosis present novel challenges for insurers, particularly under CGL policies. Insureds may face claims for a range of harms, including, but not limited to, medical expenses, psychiatric treatment, lost wages, mental anguish and emotional distress, punitive damages, and wrongful death.
Medical Expenses
A primary category of damages is medical expenses. In this emerging area, claims may include costs related to hospitalization, involuntary commitment or long-term psychiatric treatment. These expenses can be significant and may give rise to coverage disputes concerning whether such harms fall within the scope of "bodily injury" as defined by liability policies.
As noted in the bodily injury discussion above, many courts have historically distinguished between physical and purely psychological injuries, with some declining to treat mental harm as bodily injury absent physical manifestation.[27] The emergence of AI psychosis and other purported psychological issues — manifesting in delusions, paranoia or romantic attachment to AI systems — further complicates this analysis because such symptoms may not always present with outward physical effects.
If an insured is able to establish that the definition of "bodily injury" is satisfied, recoverable medical expenses may include psychiatric evaluation, inpatient hospitalization, pharmacological treatment and long-term therapy. This analysis will depend heavily on the specific policy language and jurisdiction-specific precedent.
Lost Wages
In a similar vein, alleged AI psychosis may affect a person's ability to work, either temporarily or permanently. Claims may include lost income, reduced productivity and diminished future earning potential, especially where the psychotic episode results in job loss or career disruption.[28]
Mental Anguish and Emotional Distress
Courts have long recognized mental suffering as a compensable injury, particularly when accompanied by physical symptoms or medical treatment.[29] Plaintiffs may seek damages for anxiety, depression, paranoia and other psychological sequelae resulting from AI interactions.
Punitive Damages
Punitive damages also may be sought where plaintiffs allege that AI developers acted with reckless disregard for user safety — particularly in cases involving vulnerable populations.[30]
Wrongful Death
Wrongful death claims, such as in the Florida lawsuit discussed above, further expand the scope of potential liability.[31] These claims may implicate not only bodily injury coverage but also exclusions for professional services, depending on how the AI was deployed.
AI Exclusions and Conclusion
As insurers continue to assess AI-related exposures, several have begun deploying AI exclusions, which generally serve to bar coverage for AI-related liability, particularly within the professional liability context.[32] The extent to which these exclusions are entering the general liability space remains unclear. Further, the enforceability of these exclusions remains untested.
As courts and regulators begin to confront the realities of alleged AI-induced mental health and related physical injuries, insurers will need to reevaluate policy language, underwriting practices and claims handling protocols to address this emerging risk landscape.


