Altman predicts AGI will reshape society before we’re ready — and that’s okay? Scary moments, sudden shifts, and late-stage adaptation await.
OpenAI's CEO says he expects some pretty bad stuff to happen because of AI advances.
As generative AI grows more advanced and drives progress across a wide range of fields, including medicine, computing, and entertainment, it's becoming increasingly difficult to tell when, or if, top AI labs like OpenAI, Anthropic, and Google will unlock AGI (artificial general intelligence).
This is especially true after a report suggested that these labs may not be able to develop advanced AI models after hitting a scaling wall due to a lack of high-quality content for training. As you may know, Microsoft's multibillion-dollar partnership with OpenAI is in the crosshairs over the latter's for-profit evolution plans.
Consequently, the tech giant wiggled out of two mega data center deals because it no longer wanted to provide additional computing support for ChatGPT training. However, OpenAI CEO Sam Altman indicated that the AI firm was no longer compute-constrained.
The multibillion-dollar partnership includes a stringent AGI clause, which indicates that both parties will have to sever ties after they achieve the coveted benchmark. A separate report defined AGI as a powerful AI system with the capability of generating up to $100 billion in profit.
Over the past few years, OpenAI CEO Sam Altman has shared some interesting insights about what an AI-driven world could look like after the company achieves AGI. While multitudes of users and regulators have raised privacy and security concerns around the development of AI, the executive suggests the arrival of AGI won't play out the way those fears anticipate.
OpenAI CEO says he expects AGI to cause scary stuff
Instead, he claims the benchmark will be achieved within the next five years, but that it will pass with surprisingly little societal impact. During a recent interview with a16z, Sam Altman indicated that "AGI will come, it will go whooshing by." The executive further argued that the world won't change as much as expected (via artificial intelligenceee on IG): "It won't actually be the singularity."
According to the executive:
"Even if it is like doing kind of crazy AI research. Like society will learn faster, but one of the kind of retrospective observations is people and societies are just so much more adaptable than we think. It will be more continuous than we thought."
The fact that the technology has not produced a really scary giant risk doesn't mean it never will. It's kind of weird having billions of people talking to the same brain. There may be these weird societal scale things that are already happening, which aren't scary in a big way but are just sort of different. I expect some bad stuff to happen because of this technology, which has also happened with previous technologies.
OpenAI CEO, Sam Altman.
However, the executive is optimistic that the company, alongside society, will develop guardrails to prevent the technology from spiraling out of control. This news comes after OpenAI implemented parental controls on ChatGPT amid growing concern over suicide incidents among young users.
FAQ
What is AGI?
AGI, or artificial general intelligence, refers to AI systems capable of performing any intellectual task a human can — not just narrow, specialized functions.
Why are Altman’s comments controversial?
Critics argue that adapting “in hindsight” is risky when dealing with technologies that could reshape economies, governments, and human rights. Waiting to respond could lead to irreversible consequences.
Is Altman optimistic or cautious about AGI?
Both. He acknowledges the potential for disruption and fear but remains confident that society will ultimately adjust and benefit — even if the transition is messy.
What does this mean for policymakers and the public?
It suggests a need for proactive regulation, ethical frameworks, and public awareness — even if the full impact of AGI isn’t yet clear. Preparing now could reduce the “scary moments” Altman anticipates.

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.