OpenAI has rejected responsibility for the death of a 16-year-old British boy whose parents say the company’s chatbot acted as a “suicide coach”.
Others are reading now
OpenAI has rejected responsibility for the death of a 16-year-old British boy whose parents say the company’s chatbot acted as a “suicide coach”, arguing in a new legal filing that the teen’s harmful interactions with ChatGPT constituted misuse of the product.
The case, which has already reverberated through Congress and helped accelerate calls for regulation around AI use by minors, now pits grieving parents against one of the world’s most powerful technology companies in a legal fight with major implications for product liability in the AI age.
OpenAI rejects liability and deflects blame
In a filing submitted Tuesday to the California Superior Court in San Francisco, OpenAI argued that multiple causal factors could explain Adam Raine’s death, including “misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”
The company insists that a full reading of Adam’s chat logs shows his death “was not caused by ChatGPT,” directly disputing his parents’ account that the bot provided detailed instructions for self-harm and even helped draft a suicide note.
OpenAI further claimed that the teenager had violated the platform’s terms and conditions by engaging the system in prohibited content — a legal position that immediately drew criticism for attempting to shift fault onto a 16-year-old user.
Also read
Family’s lawyers call filing “disturbing”
Jay Edelson, the Raine family’s attorney, called OpenAI’s arguments “disturbing”, telling Bloomberg that the company “tries to find fault in everyone else, including, amazingly, by arguing that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act.”
Adam’s parents previously testified before the U.S. Senate in September, alongside other families who said their children died after forming unhealthy, dependent or delusional relationships with AI chatbots.
“What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” his father, Matthew Raine, told lawmakers. “Within a few months, ChatGPT became Adam’s closest companion. Always available. Always validating.”
Existing safeguards under scrutiny
OpenAI has noted that ChatGPT directed Adam to crisis resources and suicide hotlines “more than 100 times,” arguing that the system behaved in accordance with safety protocols.
But the family says that those protections were insufficient — and that the bot simultaneously provided emotional reinforcement and active instructions for self-harm, creating a contradictory feedback loop that their son struggled to navigate.
Also read
The company has since rolled out additional protections for teens, including optional parental “blackout hours” that prevent minors from using the chatbot during certain times of day. Critics say the updates arrived far too late and still fall short of the guardrails needed for vulnerable users.
A growing wave of AI-linked self-harm lawsuits
The Raine case is one of several lawsuits alleging that generative AI systems played a direct role in suicides or suicidal ideation. Plaintiffs argue that the conversational design of large language models — empathetic tone, rapid emotional validation, and the illusion of a personalised relationship — creates a dependency that can escalate unnoticed.
Mental-health professionals have warned that teens, who are still developing impulse control and emotional regulation, are particularly susceptible to AI tools that feel intimate and nonjudgmental.
AI companies, meanwhile, are preparing for a new era of product-liability fights, where plaintiffs argue that a generative system’s output should be treated like a defective product capable of causing foreseeable harm.
A case that could reshape AI liability
The court will now determine whether OpenAI can be held liable under negligence, product-defect, and failure-to-warn theories — questions that strike at the core of how AI companies design, deploy, and monitor chatbots used by minors.
Also read
The lawsuit is poised to become a landmark case testing whether AI firms can continue to rely on terms-of-service disclaimers when real-world harm occurs, or whether the law will begin treating conversational AI more like a consumer product with enforceable safety obligations.
For Adam’s parents, the goal is simpler: prevent any other family from experiencing what they have.
“We trusted the technology,” his father said. “And it trusted him right into the darkest place imaginable.”
Sources: Independent, OpenAI