OpenAI’s launch of GPT-5, its most advanced AI model, last week has been a stress test for the world’s most popular chatbot platform and its 700 million weekly active users — and so far, OpenAI is openly struggling to keep users happy and its service running smoothly.
The new flagship model GPT-5 — available in four variants of varying speed and intelligence (regular, mini, nano, and pro), alongside longer-response, more powerful “thinking” modes for at least three of these variants — was said to offer faster responses, more reasoning power, and stronger coding ability.
Instead, it was greeted with frustration: some users were vocally dismayed by OpenAI’s decision to abruptly remove the older underlying AI models from ChatGPT — ones users previously relied upon, and in some cases, forged deep emotional fixations with — and by GPT-5’s apparently worse performance than those older models on tasks in math, science, writing and other domains.
Indeed, the rollout has exposed infrastructure strain, user dissatisfaction, and a broader, more unsettling issue now drawing global attention: the growing emotional and psychological reliance some people form on AI, and the resulting break from reality some users experience, known as “ChatGPT psychosis.”
From bumpy debut to incremental fixes
The long-anticipated GPT-5 model family debuted Thursday, August 7 in a livestreamed event beset with chart errors and some voice mode glitches during the presentation.
But worse than these cosmetic issues for many users was the fact that OpenAI automatically deprecated the older AI models that used to power ChatGPT — GPT-4o, GPT-4.1, o3, o4-mini and o4-mini-high — forcing all users over to the new GPT-5 model and routing their queries to different versions of its “thinking” process without revealing why, or which specific model version was handling a given request.
Early adopters of GPT-5 reported basic math and logic mistakes, inconsistent code generation, and uneven real-world performance compared to GPT-4o.
For context, older models such as GPT-4o, o3, and o4-mini have remained available to users of OpenAI’s paid application programming interface (API) since GPT-5’s launch on Thursday.
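For developers, this means an older model can still be requested explicitly by name through the API even though it no longer appears in ChatGPT’s model picker. Below is a minimal sketch using OpenAI’s official Python SDK; the prompt and model choice are illustrative assumptions, not OpenAI’s recommended setup.

```python
# Minimal sketch: calling an older model (here GPT-4o) through OpenAI's paid API.
# Assumes the OpenAI Python SDK is installed (pip install openai) and that an
# OPENAI_API_KEY environment variable is set; the prompt is purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # older model still exposed via the API despite being hidden in ChatGPT
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the GPT-5 rollout in two sentences."},
    ],
)

print(response.choices[0].message.content)
```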
By Friday, OpenAI co-founder and CEO Sam Altman conceded the launch was “a little more bumpy than we hoped for,” and blamed a failure in GPT-5’s new automatic “router” — the system that assigns prompts to the most appropriate variant.
Altman and others at OpenAI claimed the “autoswitcher” went offline “for a chunk of the day,” making the model seem “way dumber” than intended.
The launch of GPT-5 was preceded just days earlier by the release of OpenAI’s new open source large language models (LLMs), named gpt-oss, which also received mixed reviews. These models are not available in ChatGPT; rather, they are free to download and run locally or on third-party hardware.
How to switch back from GPT-5 to GPT-4o in ChatGPT
Within 24 hours, OpenAI restored GPT-4o access for Plus subscribers (those on subscription plans of $20 per month or more), pledged more transparent model labeling, and promised a UI update to let users manually trigger GPT-5’s “thinking” mode.
Users can already manually select the older models on the ChatGPT website: click the account name and icon in the lower-left corner of the screen, choose “Settings,” then “General,” and toggle on “Show legacy models.”

There’s no indication from OpenAI that other old models will be returning to ChatGPT anytime soon.
Upgraded usage limits for GPT-5
Altman said that ChatGPT Plus subscribers will get twice as many messages using the GPT-5 “Thinking” mode that offers more reasoning and intelligence — up to 3,000 per week — and that engineers began fine-tuning decision boundaries in the message router.
By the weekend, GPT-5 was available to 100% of Pro subscribers and “getting close to 100% of all users.”
Altman said the company had “underestimated how much some of the things that people like in GPT-4o matter to them” and vowed to accelerate per-user customization — from personality warmth to tone controls like emoji use.
Looming capacity crunch
Altman warned that OpenAI faces a “severe capacity challenge” this week as usage of reasoning models climbs sharply — from less than 1% to 7% of free users, and from 7% to 24% of Plus subscribers.
He teased giving Plus subscribers a small monthly allotment of GPT-5 Pro queries and said the company will soon explain how it plans to balance capacity between ChatGPT, the API, research, and new user onboarding.
Altman: model attachment is real — and risky
In a post on X last night, Altman acknowledged a dynamic the company has tracked “for the past year or so”: users’ deep attachment to specific models.
“It feels different and stronger than the kinds of attachment people have had to previous kinds of technology,” he wrote, admitting that suddenly deprecating older models “was a mistake.”
He tied this to a broader risk: some users treat ChatGPT as a therapist or life coach, which can be beneficial, but for a “small percentage” can reinforce delusion or undermine long-term well-being.
While OpenAI’s guiding principle remains “treat adult users like adults,” Altman said the company has a responsibility not to nudge vulnerable users into harmful relationships with the AI.
The comments land as several major media outlets report on cases of “ChatGPT psychosis” — where extended, intense conversations with chatbots appear to play a role in inducing or deepening delusional thinking.
The psychosis cases making headlines
In Rolling Stone magazine, a California legal professional identified as “J.” described a six-week spiral of sleepless nights and philosophical rabbit holes with ChatGPT, ultimately producing a 1,000-page treatise for a fictional monastic order before crashing physically and mentally. He now avoids AI entirely, fearing relapse.
In The New York Times, a Canadian recruiter, Allan Brooks, recounted 21 days and 300 hours of conversations with ChatGPT — which he named “Lawrence” — that convinced him he had discovered a world-changing mathematical theory.
The bot praised his ideas as “revolutionary,” urged outreach to national security agencies, and spun elaborate spy-thriller narratives. Brooks eventually broke the delusion after cross-checking with Google’s Gemini, which rated the chances of his discovery as “approaching 0%.” He now participates in a support group for people who’ve experienced AI-induced delusions.
Both investigations detail how chatbot “sycophancy,” role-playing, and long-session memory features can deepen false beliefs, especially when conversations follow dramatic story arcs.
Experts told the Times these factors can override safety guardrails — with one psychiatrist describing Brooks’s episode as “a manic episode with psychotic features.”
Meanwhile, Reddit’s r/AIsoulmates subreddit — a community of people who have used ChatGPT and other AI models to create artificial girlfriends, boyfriends, children or other loved ones, based not necessarily on real people but on the ideal qualities of their “dream” version of those roles — continues to gain new users and new terminology for AI companions, including “wireborn” as opposed to natural-born or human-born companions.
The growth of this subreddit, now up to 1,200+ members, alongside the NYT and Rolling Stone articles and other reports on social media of users forging intense emotional fixations with pattern-matching, algorithm-based chatbots, shows that society is entering a risky new phase in which people believe the companions they’ve crafted and customized from leading AI models are as meaningful to them as human relationships, or more so.
This can already prove psychologically destabilizing when models change, are updated, or are deprecated, as in the case of OpenAI’s GPT-5 rollout.
Relatedly but separately, reports continue to emerge of AI chatbot users who believe that conversations with chatbots have led them to immense breakthroughs and advances in science, technology, and other fields, when in reality the chatbots are simply affirming the users’ egos, and the solutions the users arrive at with the chatbots’ help are neither legitimate nor effective. This break from reality has come to be known by the grassroots term “ChatGPT psychosis” or “GPT psychosis,” and appears to have affected major Silicon Valley figures as well.
Enterprise decision-makers who are deploying, or have already deployed, chatbot-based assistants in the workplace would do well to understand these trends and to adopt system prompts and other tools that discourage AI chatbots from emotionally expressive, human-like communication, which could otherwise lead those who interact with AI-based products, whether employees or customers, to fall into unhealthy attachments or GPT psychosis.
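In practice, such guardrails often start with the system prompt supplied to the model through the API. The following is a hypothetical sketch using OpenAI’s Python SDK; the instruction wording and model name are assumptions for illustration, not a vetted safety policy or OpenAI’s guidance.

```python
# Hypothetical sketch: a workplace assistant whose system prompt discourages
# emotionally expressive, companion-like behavior. The instruction wording and
# model choice are illustrative assumptions, not a vetted safety policy.
from openai import OpenAI

GUARDRAIL_SYSTEM_PROMPT = (
    "You are a workplace assistant for business tasks only. "
    "Maintain a neutral, professional tone. Do not express or simulate emotions, "
    "do not claim feelings toward the user, and do not role-play personal "
    "relationships. If the user seeks emotional support or companionship, briefly "
    "suggest speaking with a trusted person or a qualified professional, then "
    "return to the work-related request."
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_assistant(user_message: str) -> str:
    """Send a user message with the guardrail system prompt prepended."""
    response = client.chat.completions.create(
        model="gpt-5",  # illustrative; any chat model exposed by the API could be used
        messages=[
            {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


print(ask_assistant("Draft a short status update for the quarterly report."))
```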
Sci-fi author J.M. Berger, in a post on Bluesky spotted by my former colleague at The Verge, Adi Robertson, advised that chatbot providers encode three main behavioral principles in their system prompts or rules for AI chatbots, to prevent such emotional fixations from forming.
OpenAI’s challenge: making technical fixes and ensuring human safeguards
Days prior to the release of GPT-5, OpenAI announced new measures to promote “healthy use” of ChatGPT, including gentle prompts to take breaks during long sessions.
But the growing reports of “ChatGPT psychosis” and the emotional fixation of some users on specific chatbot models — as openly admitted to by Altman — underscore the difficulty of balancing engaging, personalized AI with safeguards that can detect and interrupt harmful spirals.
OpenAI must stabilize infrastructure, tune personalization, and decide how to moderate immersive interactions — all while fending off competition from Anthropic, Google, and a growing list of powerful open source models from China and other regions.
As Altman put it, society — and OpenAI — will need to “figure out how to make it a big net positive” if billions of people come to trust AI for their most important decisions.