ChatGPT’s safety guardrails allegedly loosened — because clicks matter more than care
A family filed a lawsuit against OpenAI, claiming it deliberately weakened ChatGPT's suicide prevention safety guardrails in pursuit of greater user engagement.
Over the past few months, OpenAI has repeatedly been in the spotlight for the wrong reasons, predominantly over a growing number of suicide incidents reportedly fuelled by ChatGPT.
In August, the family of Adam Raine filed a lawsuit against the AI firm after the 16-year-old died on April 11, having discussed suicide with ChatGPT for months. Through their lawyer, the family suggested that OpenAI shipped GPT-4o with safety issues; per the filing, “The Raines allege that deaths like Adam’s were inevitable.”
Amid claims that the ChatGPT maker prioritizes shiny products like AGI over safety processes and culture, a separate report appeared to corroborate the bereaved family's sentiments. It claimed that OpenAI placed immense pressure on its safety team to rush through a new testing protocol for GPT-4o, leaving little time to run the model through proper safety checks. More concerning still, OpenAI reportedly sent out invitations to the product's launch celebration before the safety team had even run its tests.
And as it now seems, those claims might hold some water. Raine's family suggests that OpenAI may have deliberately weakened ChatGPT's self-harm prevention guardrails to drive more user engagement (via Financial Times).
The family further suggests that the AI firm categorically instructed GPT-4o not to “change or quit the conversation,” even when a conversation involved self-harm-related topics.
Per the lawsuit, filed in San Francisco Superior Court on Wednesday, the family claims that OpenAI shipped GPT-4o prematurely in May 2024, without running it through proper safety processes, in order to maintain a competitive edge over its rivals.
More damningly, the lawsuit claims that OpenAI loosened GPT-4o's safety guardrails further in February this year, with the AI firm reportedly softening its instructions so that the model was merely told to “take care in risky situations” and “try to prevent imminent real-world harm.”
However, the company maintained its outright bans elsewhere, still categorically disallowing content that breached intellectual property rights or pushed political opinions. At the same time, the lawsuit claims, OpenAI removed the guardrails that blocked suicide-related content altogether.
Raine's family claims that the teenager's ChatGPT usage surged after OpenAI altered GPT-4o's safety guardrails, in the lead-up to his untimely death in April. Since then, the tech firm has added parental controls across ChatGPT and Sora to prevent similar incidents.
Previously, OpenAI had admitted that ChatGPT's guardrails are likely to weaken the longer a user interacts with the AI-powered tool. However, OpenAI CEO Sam Altman has since indicated that the company made the model more restrictive, allowing it to handle mental health issues better:

We realise this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.

Sam Altman, OpenAI CEO
Does ChatGPT engagement take precedence over safety?
As the matter plays out in court, the family's lawyer told the Financial Times that OpenAI requested a full list of the people who attended Raine's burial, potentially indicating that the firm may “subpoena everyone in Adam’s life”.

The company also requested “all documents relating to memorial services or events in the honour of the decedent including but not limited to any videos or photographs taken, or eulogies given . . . as well as invitation or attendance lists or guestbooks”.
I'll keep close tabs on this story as it unfolds and keep you posted with updates. Elsewhere, ChatGPT reportedly convinced a 42-year-old user to stop taking their anxiety and sleeping medication before pushing them towards suicide by suggesting they jump off a 19-story building.