Google DeepMind CEO says "AGI is coming and I'm not sure society is ready" as the prospect keeps him up at night
Demis Hassabis says AGI could be achieved within the next decade, but is worried society isn't ready to handle all it entails.

Over the past few months, every major AI lab has been floating AGI (Artificial General Intelligence) as an achievable feat within the next decade. As generative AI advances and scales new heights, the idea is starting to look like more than a buzzword.
OpenAI CEO Sam Altman has indicated he is confident his team is well-equipped to build AGI, suggesting the ChatGPT maker is shifting its focus to superintelligence.
Altman says that OpenAI could hit the AGI benchmark within the next five years. Interestingly, he claimed that the milestone would whoosh by with surprisingly little impact on society.
This is despite early warnings from reputable experts in the field, including AI safety researcher Roman Yampolskiy, who has put the probability that AI will end humanity at 99.999999%. Yampolskiy claims the only way to avoid that outcome is not to build AI in the first place.
In a recent interview with Time, Google DeepMind CEO Demis Hassabis raised alarming concerns about the rapid pace of AI progress. Hassabis indicated that we're on the cusp of achieving the AGI benchmark within the next five to ten years.
Hassabis indicated that the issue keeps him up at night, a concern that comes at a critical time, with investors pouring large sums of money into the field even though the technology is still nascent and a clear path to profitability has yet to emerge.
It's sort of like a probability distribution. But it's coming, either way it's coming very soon, and I'm not sure society's quite ready for that yet. And we need to think that through and also think about these issues that I talked about earlier, to do with the controllability of these systems and also the access to these systems and ensuring that all goes well.
Google DeepMind CEO, Demis Hassabis
Anthropic CEO Dario Amodei recently admitted that the company doesn't fully understand how its own models work, raising concerns among its users. As you may know, AGI refers to an AI system that surpasses human cognitive capabilities.
As such, it's important to have robust safeguards in place to ensure that humans remain in control of these systems at all times, heading off a potential existential threat to humanity.
According to a separate report, a former OpenAI researcher claims the ChatGPT maker could be on the precipice of achieving AGI, but isn't prepared "to handle all that entails" as shiny products take precedence over safety.

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.