Bill Gates’ 2-Year Prediction: Did GPT-5 Reach Its Peak Before Launch — Despite Sam Altman’s Promises of Improvements?
Microsoft’s co-founder was skeptical that GPT-5 would offer more than modest improvements, and his prediction seems accurate.

There had been a lot of hype and anticipation building around GPT-5 prior to its recent launch. OpenAI touted the tool as its smartest AI model yet, comparing it to an entire team of PhD-level experts. GPT-5 ships with a plethora of next-gen features across a wide range of categories, including coding, writing, and medicine.
The ChatGPT maker's CEO, Sam Altman, previously claimed that something "smarter than the smartest person you know" will soon be running on a device in your pocket, potentially referring to GPT-5. However, the AI firm has received backlash from users following the model's launch and its abrupt decision to deprecate the model's predecessors.
"They have ruined ChatGPT," lamented a user while citing the tool's degraded user experience rife with bugs, glitches, and unresponsiveness. Recently, Altman issued some updates regarding GPT-5's rollout, including doubling of GPT-5's rate limits for ChatGPT Plus users, continued access to GPT-4o for Plus users, and more transparency about which model is responding to a query.
"GPT-5 rollout updates: We are going to double GPT-5 rate limits for ChatGPT Plus users as we finish rollout. We will let Plus users choose to continue to use 4o. We will watch usage as we think about how long to offer legacy models for. GPT-5 will seem smarter starting…" — Sam Altman, August 8, 2025
The executive attributed the model's dismal performance at launch to GPT-5's autoswitcher being broken, which made the model seem dumber than it is. And while OpenAI says it has since remedied the situation, Microsoft co-founder Bill Gates may have predicted GPT-5's current predicament two years ago, before the model ever came into existence (via The Indian Express).
Sam Altman previously promised with "a high degree of scientific certainty" that GPT-5 would be smarter than GPT-4, which he admitted "kind of sucks" and is mildly embarrassing at best.
Speaking to the German business newspaper Handelsblatt in October 2023, Bill Gates claimed that GPT technology had reached a plateau, despite the belief among most of OpenAI's team members, including Sam Altman, that GPT-5 would be significantly better than GPT-4.
While the philanthropic billionaire described the leap from GPT-2 to GPT-4 as incredible, he was openly skeptical, questioning OpenAI's ability to replicate that kind of jump with GPT-5.
True to his word, multiple users have expressed their disappointment with GPT-5, describing it as barely an improvement over GPT-4. Gates predicted that development in the generative AI landscape had plateaued with GPT-4, suggesting that OpenAI had hit a ceiling with its GPT technology.
However, he stated that with new research, AI could still scale new heights, becoming more reliable and better at delivering healthcare advice via smartphones, for instance. He also noted that AI requires an exorbitant amount of funding and computing power to run:
“Well, it’s pretty expensive to train a large language model. But the actual usage costs were once ten cents per query. Today it’s probably more like three cents. The costs for computing power or semiconductors remain enormous.”
Reports of OpenAI hitting a wall in AI development are not new
Last year, a report emerged claiming that top AI labs, including OpenAI, Google, and Anthropic, were struggling to develop more advanced AI models. The struggle was attributed to a lack of high-quality content for model training and the high cost of chasing the AI hype. The report further revealed that the delayed launch of next-gen models was closely tied to these issues.
However, OpenAI CEO Sam Altman quickly dismissed the claims, stating that "there is no wall" and suggesting that the AI firm hadn't reached a ceiling in training its AI models. Former Google CEO Eric Schmidt echoed similar sentiments, indicating that there's no evidence the scaling laws have begun to stop:
"In 5 years, you'll have two or three more turns of the crank in these large models. These large models are scaling with ability that is unprecedented. There's no evidence that the scaling laws have begun to stop. They will eventually stop but we're not there yet."
It'll be interesting to see whether OpenAI's GPT-5 will live up to the hype and whether the company will address the issues highlighted by aggrieved users. What are your thoughts on OpenAI's GPT technology plateauing? Let me know in the comments.

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.