OpenAI tells ChatGPT users to take a break — Altman asks "What have we done?" just days before GPT-5 launch
Using ChatGPT for too long will now prompt a reminder to take a break.

The next time you talk with ChatGPT for too long, you may see a prompt to take a break. OpenAI is rolling out several features to encourage healthy use of its popular chatbot.
ChatGPT will now show reminders during long sessions. "You've been chatting for a while — is this a good time for a break?" reads one such prompt.
While ChatGPT can be a useful tool, overuse carries potential consequences. Research has indicated that overdependence on AI can weaken critical thinking, reduce cognitive engagement, and increase loneliness.
Given those risks, it's unsurprising that OpenAI would put measures in place to encourage healthier use of ChatGPT.
OpenAI discussed how AI can feel more personal and responsive than other technologies, noting that an agreeable chatbot can create a reinforcing feedback loop.
"There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency," said OpenAI.
Earlier this year, the tech giant had to adjust ChatGPT because the tool was too agreeable.
ChatGPT has now been trained to detect signs of mental or emotional distress and point people toward appropriate help. OpenAI CEO Sam Altman recently discussed why using ChatGPT as a therapist is a privacy nightmare.
While there may come a time when ChatGPT acts like a therapist, for now it makes sense for the tool to guide users toward evidence-based resources.
If you ask ChatGPT for help with personal challenges, the tool will now ask questions and encourage you to weigh pros and cons rather than giving you a direct answer. OpenAI goes so far as to say, "ChatGPT shouldn't give you an answer" when asked about a personal challenge.
OpenAI shared that it has convened advisory groups of mental health experts and worked with over 90 physicians to better train ChatGPT to spot and respond to signs of mental or emotional distress.
Should we be afraid of ChatGPT?
OpenAI's GPT-5 is a highly anticipated AI model. Altman has raved about it for months, but he has also warned people about the technology.
In contrast to GPT-4, which Altman said "kind of sucks," GPT-5 is said to be smarter and to "feel very fast."
But he also compared GPT-5, which could launch this month and be integrated into ChatGPT and other tools, to the Manhattan Project:
"There are moments in the history of science where you have a group of scientists look at their creation and just say, you know, ‘What have we done?’"
Altman also warned that AI is developing so rapidly it could spiral out of control. "It feels like there are no adults in the room," said the CEO.
Some have called Altman's comments marketing speak, likening them to a salesman saying, "Our prices are so low they’re scary!"
Still, AI’s rapid development poses mental and emotional health risks, which is reflected in ChatGPT’s new prompts and reminders to take a break.

Sean Endicott is a tech journalist at Windows Central, specializing in Windows, Microsoft software, AI, and PCs. He's covered major launches, from Windows 10 and 11 to the rise of AI tools like ChatGPT. Sean's journey began with the Lumia 930, leading to strong ties with app developers. Outside writing, he coaches American football, utilizing Microsoft services to manage his team. He studied broadcast journalism at Nottingham Trent University and is active on X @SeanEndicott_ and Threads @sean_endicott_.