Sam Altman says OpenAI is no longer "compute-constrained" — after Microsoft lost its exclusive cloud provider status
OpenAI CEO claims the company is no longer struggling with computing power for its flagship models.

Sam Altman recently indicated that OpenAI is no longer compute-constrained, removing what had been a major bottleneck in developing its most advanced AI models.
Under its multi-billion-dollar partnership with Microsoft, OpenAI relied on the tech giant as its exclusive cloud provider. However, multiple reports suggested that Microsoft couldn't supply the computing power OpenAI needed.
The AI lab reportedly feared that rival firms could reach the coveted AGI milestone first, and that Microsoft's shortfall would be to blame. Consequently, OpenAI found some wiggle room in its agreement with Microsoft, following the announcement of its $500 billion Stargate project to build data centers across the United States to support its AI efforts.
Microsoft ended up losing its status as OpenAI's exclusive cloud provider and largest backer as OpenAI seemingly tightened its partnership with SoftBank, which recently led the startup's latest funding round, raising $40 billion from key investors and pushing its valuation to $300 billion.
It'd only take 5 to 10 people to build GPT-4 from scratch
As you may know, OpenAI is getting ready to retire GPT-4 from ChatGPT and replace it with GPT-4o. However, the company indicated that the model will remain accessible via its API.
OpenAI CEO Sam Altman has openly expressed his disappointment with the model, saying it "kind of sucks" and is mildly embarrassing at best. He also called GPT-4 the dumbest model users will ever have to interact with, suggesting an upward trajectory in capabilities for OpenAI's flagship models.
Perhaps more interestingly, while speaking with the engineers behind GPT-4.5, the executive asked what it would take to build GPT-4 again from scratch.
For context, the CEO revealed that when GPT-4 was being developed, it took "hundreds of people, almost all of OpenAI's effort." Things have seemingly gotten easier as generative AI has scaled and advanced.
Interestingly, Alex Paino, who led pretraining machine learning for GPT-4.5, indicated that rebuilding GPT-4 would probably only need 5 to 10 people (via Business Insider).
According to the engineer:
"We trained GPT-4o, which was a GPT-4-caliber model that we retrained using a lot of the same stuff coming out of the GPT-4.5 research program. Doing that run itself actually took a much smaller number of people."
OpenAI researcher Daniel Selsam seemingly reiterated Paino's sentiments, indicating that building GPT-4 from scratch would be much easier now. "Just finding out someone else did something — it becomes immensely easier," the researcher added. "I feel like just the fact that something is possible is a huge cheat code."
