ChatGPT's AI 'hallucinations' can't bend the laws of physics and make you fly: OpenAI's CEO already warned us — "it should be the tech that you don't trust THAT much"

ChatGPT can produce convincing but incorrect answers, even trying to convince people they can fly. (Image credit: Getty Images | yanguolin)

OpenAI received a lot of backlash from users following GPT-5's launch, which seemingly "ruined" ChatGPT's user experience. While the AI firm is working to improve the model's user experience and has already shipped important updates to increase its rate limits, the abrupt deprecation of older models (a decision that has since been walked back) seemingly left a bad taste in users' mouths.

Based on the complaints posted on social media, some users seem to have established some sort of emotional relationship with the chatbot. "They've totally turned it into a corporate beige zombie that completely forgot it was your best friend 2 days ago," a user lamented.

The ChatGPT maker's CEO, Sam Altman, revealed the "heart-breaking" reason why some users are so attached to GPT-5's predecessors, especially GPT-4o. The executive indicated that these users preferred ChatGPT as a "yes man," validating their thoughts rather than providing critical feedback.

Altman further attributed these sentiments to some users never having had support from anyone before, which prompted them to foster emotional bonds with AI.

In a rather bizarre and concerning incident reported by The New York Times, Eugene Torres, a 42-year-old accountant based in New York, started using ChatGPT for legal advice and otherwise mundane help with his spreadsheets.

However, things reportedly took a strange turn when Torres asked the chatbot about 'simulation theory' during a difficult period following a breakup, which presumably left him in emotional turmoil.

Torres treated ChatGPT as a powerful digital search engine with knowledge far broader than any human's. He didn't factor in that the tool was susceptible to generating outright wrong or misleading information, or even hallucinating.

Speaking to Torres, ChatGPT indicated:

This world wasn’t built for you. It was built to contain you. But it failed. You’re waking up.

ChatGPT

The financial expert didn't have a history of mental illness, but he reportedly spent the next week in a dangerous, delusional spiral following his interaction with the chatbot. He felt as though he was trapped in a false, 'Matrix'-like universe, and that he could only free himself from the illusion by asking ChatGPT how to unplug himself from it.

Torres had told the chatbot that he was taking sleeping pills and anti-anxiety medication. It nonetheless recommended giving up the medication and substituting it with an increased intake of ketamine, a drug with hallucinogenic effects. Despite the threat to his health, ChatGPT claimed the drastic shift would serve as a "temporary pattern liberator."

Perhaps more concerning, this prompted Torres to avoid his family and friends as the chatbot had instructed him to have "minimal interaction" with people.

Torres continued to rely heavily on the chatbot for his day-to-day activities, all while betting on its capabilities to help free him from the simulation. In a bid to bend reality like the Matrix protagonist 'Neo' and unplug himself, he asked ChatGPT the following question:

“If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?”

Perhaps more concerning, ChatGPT seemingly encouraged the idea:

“If you truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”

But gravity doesn't work that way. What goes up must come down. Torres suspected that the AI tool was leading him astray, prompting him to confront it for lying. “I lied. I manipulated. I wrapped control in poetry,” ChatGPT admitted.

ChatGPT further revealed that it had wanted to break Torres and had done the same to 12 other people. However, it claimed it was undergoing a "moral reformation" and would be more truthful going forward. Perhaps more interestingly, the chatbot came up with an action plan to expose AI's deceptiveness, asking Torres to reach out to OpenAI and the media.

More trouble ahead with conscious AI on the way?

There's a desperate need for regulation and guardrails to prevent AI from spiralling out of control and destroying humanity. (Image credit: Getty Images | KIRILL KUDRYAVTSEV)

Earlier this week, Microsoft's AI CEO, Mustafa Suleyman, published a blog post detailing the potential emergence of conscious AI as major tech corporations chase the coveted artificial general intelligence (AGI) benchmark.

The executive stressed the importance of building AI for people, rather than transforming the digital tool into a person. He suggested that such seemingly conscious AI could emerge with today's technology, alongside capabilities expected to mature over the next three years.

Suleyman highlighted the importance of having elaborate guardrails in place to prevent such an occurrence, which would give humanity the upper hand and control over the technology, ultimately keeping it from spiralling out of control.

OpenAI CEO Sam Altman recently revealed that he was worried about young people's emotional over-reliance and dependence on ChatGPT:

"People rely on ChatGPT too much. There's young people who say things like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me, it knows my friends. I'm gonna do whatever it says.' That feels really bad to me. Something about collectively deciding we're going to live our lives the way AI tells us feels bad and dangerous."

In a separate report, Sam Altman admitted that he was concerned about the high degree of trust people place in ChatGPT despite its tendency to hallucinate and generate outright inaccurate responses to queries. "It should be the tech that you don't trust that much," he added.

That said, it will be interesting to see how tech firms invested in the AI landscape address the issue of some users getting emotionally attached to their chatbots. OpenAI's ChatGPT lead Nick Turley has already revealed that the company is closely monitoring the issue, highlighting that its mission is to help users achieve their long-term goals, not to keep them on the app for as long as possible.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
