
OpenAI launched its latest GPT-5 AI model late last week for all users, with the usual range of free and paid plans to check out (including free access through Copilot). However, the launch didn't go as smoothly as many expected it would.
OpenAI and its CEO, Sam Altman, hyped the new AI as "the smartest model ever," comparing it to a team of PhD-level experts. Despite Altman's claims that GPT-5 could potentially work like a virtual brain (while also calling GPT-4 "mildly embarrassing"), it didn't take long for everyday users to find its shortcomings.
GPT-5 does post impressive benchmark results compared to competing AI models, but many users say it's hardly an improvement over its predecessor, GPT-4o. Since the rollout, users have reported bugs, glitches, and unresponsiveness.
It gets worse for those who have been treating the AI like a therapist, close friend, or "soul companion." GPT-5 has entered the room with far less personality than its predecessor, destroying some of the relationships that users have formed with the virtual entity.
One Reddit user highlighted the potential havoc created by the new model, stating that it would "really hurt vulnerable users" while sharing some personal backstory:
"Ever since they’ve been making their "updates" or whatever the heck I’ve been extra depressed and relapsing with my eating disorder since I have gotten into a groove of things and gotten used to things and then they have the nerve to not just release a new model not just Nerf the old one take every single one away? Do you know some people are actually completely reliant on this and they’re gonna feel more alone than ever and then what?"
A reply to the OP calls GPT-5 a "corporate beige zombie that completely forgot it was your best friend 2 days ago." The backlash is real, and OpenAI hasn't been sitting idly by.
OpenAI CEO Sam Altman responds to GPT-5 blowback
The general disappointment with GPT-5 has caused OpenAI to backpedal in a major way, going so far as to reinstate GPT-4o access for select users. The problem? GPT-4o access now requires a $20 monthly paid subscription, rather than being available to everyone.
While I don't doubt that plenty of users will pay the $20 fee to get their lost companion back, the change leaves many vulnerable users in the lurch.
Sam Altman shared a post on X to address the GPT-5 rollout; it's part personal, part corporate, and all Altman. He begins by acknowledging the attachment that users are forming with specific models, and states that "suddenly deprecating old models that users depended on in their workflows was a mistake."
"If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology..." — Sam Altman on X, August 11, 2025
Altman then dives straight into what he believes is the root of the problem, stating that "people have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that."
According to Altman, a "small percentage" of users cannot differentiate between "reality and fiction or role-play," and he says OpenAI must take responsibility for how its technology handles those risks.
Part of that responsibility generally involves treating "adult users like adults," according to Altman, but there will be edge cases.
"Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it’s pretty clear what to do, but the concerns that worry me most are more subtle." — Sam Altman, OpenAI CEO
Altman then explains that users who are getting "good advice, leveling up toward their own goals, and their life satisfaction is increasing over years" shouldn't have any issues.
On the other hand, Altman worries about users who "have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it)."
Altman focuses on what makes him uneasy:
"I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive."
While Altman seems to admit that the GPT-5 rollout has been a flub — Bill Gates is likely smirking after making a prescient AI prediction two years ago — the post ends on a positive note.
When it comes to balancing AI personality and function, Altman believes OpenAI has "a good shot at getting this right."
The unfolding drama about OpenAI users losing their companions to a new model is not something I was expecting, but I understand why it's occurring. It's a cold, hard world out there, and a friend in a tough spot is not something to be turned down, whether it's virtual or real.

Cale Hunt brings to Windows Central more than nine years of experience writing about laptops, PCs, accessories, games, and beyond. If it runs Windows or in some way complements the hardware, there’s a good chance he knows about it, has written about it, or is already busy testing it.