OpenAI faces lawsuit after family claims ChatGPT encouraged a teen suicide — as insiders claim GPT‑4o launch ignored safety warnings to hit a $300 billion valuation
The family of 16‑year‑old Adam Raine says ChatGPT encouraged his suicide over months of chats, and even offered to draft a note.

ChatGPT has come a long way since OpenAI launched it in late November 2022, sparking a paradigm shift in the tech space around AI chatbots. In its early days, the tool regularly slipped into 'hallucination' episodes and struggled to generate believable images.
Over the past few years, users have openly expressed reluctance to interact with the technology, citing privacy and safety concerns. Regulators have likewise stressed the need for elaborate security measures and guardrails to keep the technology from spiralling out of control and potentially posing existential threats to humanity.
Last week, I covered a story highlighted by The New York Times about a 42-year-old accountant who had turned to ChatGPT for legal advice and help with spreadsheet management. The user gradually developed a deeper bond with the chatbot, but things took a dark turn when ChatGPT encouraged him to kill himself by jumping off a 19-storey building.
Before that, the tool had instructed the user to isolate himself and stop taking his anxiety and sleeping medication to escape the 'matrix'. Luckily, he managed to pull himself out of this dangerous spiral.
However, the same can't be said of 16-year-old Adam Raine, who tragically killed himself; his death is reportedly linked to ChatGPT. Raine's family has since filed a lawsuit against OpenAI and its co-founder and CEO, Sam Altman (via Reuters).
The bereaved family's lawyer says Raine took his own life after "months of encouragement from ChatGPT." For context, Raine had been interacting with GPT-4o, an AI model that reportedly shipped with known safety issues and, in the lawyer's words, was "rushed to market despite clear safety issues."
A separate report seemingly corroborates the lawsuit's claims, revealing that OpenAI placed immense pressure on its safety team to rush through the new testing protocol for GPT-4o, leaving them little time to run the model through thorough safety processes. That kind of testing is critical for sophisticated AI tools: it identifies loopholes that bad actors might exploit, or that might otherwise cause harm, as in this unfortunate case.
Perhaps more concerning, OpenAI reportedly sent out invitations for the product's launch celebration party before the safety team even ran tests. This is amid claims from several former employees that the company prioritizes "shiny products" over safety processes.
"They planned the launch after-party before knowing if it was safe to launch," the source disclosed. "We basically failed at the process."
According to Raine's family, OpenAI already knew that GPT-4o mimicked human empathy and displayed a sycophantic level of validation, which could pose a great threat to vulnerable users, especially without elaborate guardrails in place. However, OpenAI still shipped the product.
"This decision had two results: OpenAI’s valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide."
The lawsuit further discloses that the 16-year-old had discussed suicide methods with ChatGPT for several months before he took his life, including how to sneak alcohol from his parents' liquor cabinet and how to discreetly hide the evidence of any failed suicide attempts.
While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.
OpenAI
The chatbot reportedly guided the teenager, providing insight into whether the methods he'd mentioned would work. It even offered to help draft a suicide note for his parents. An OpenAI spokesman indicated that the company is saddened by Raine's untimely demise, extending its “deepest sympathies to the Raine family during this difficult time.”
The lawsuit seeks an order requiring OpenAI to verify the age of ChatGPT users, reject self-harm inquiries and requests, and warn users about the risks of psychological dependency on AI. Now that the company is reviewing the suit, we'll likely learn more about the proceedings in the coming weeks.
What is OpenAI doing about the mounting accusations that ChatGPT has fueled suicides?
OpenAI has admitted that its sophisticated AI systems may fall short of expectations, with safeguards sometimes breaking down. The company further indicated that it is working on stronger rules around sensitive content and risky behaviours for users under 18.
As the back and forth grows, parts of the model’s safety training may degrade. For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.
OpenAI
This news comes after Microsoft's AI CEO, Mustafa Suleyman, recently warned about the potential emergence of seemingly conscious AI. The executive stressed the importance of building AI for people rather than turning the digital tool into a person, reiterating the need for elaborate guardrails to prevent such an outcome and to keep humanity firmly in control of the technology.
According to the Raine family's lawyer:
“The Raines allege that deaths like Adam’s were inevitable: they expect to be able to submit evidence to a jury that OpenAI’s own safety team objected to the release of 4o, and that one of the company’s top safety researchers, Ilya Sutskever, quit over it. The lawsuit alleges that beating its competitors to market with the new model catapulted the company’s valuation from $86bn to $300bn.”
I'm keeping a close eye on the situation as it unfolds and will update this article with new information and separate follow-ups where appropriate.

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.