Google's DeepMind CEO lists 2 AGI existential risks to society keeping him up at night — but claims "today's AI systems" don't warrant a pause on development

DeepMind CEO Demis Hassabis​. (Image credit: Getty Images | Jack Taylor / Stringer)

We're living in an unprecedented moment as the generative AI boom revolutionizes nearly every aspect of our lives, including work, medicine, computing, education, and entertainment. Seemingly every major tech company is chasing the AI hype, shipping new AI models every so often as they strive to hit the coveted AGI benchmark.

In a wide-ranging interview with WIRED's Steven Levy, Google DeepMind CEO Demis Hassabis shared some interesting insights and predictions about where the world is headed as we venture into the AI era.

As you might expect, AGI (Artificial General Intelligence) was a hot topic in the interview. For context, the term refers to a sophisticated AI system whose cognitive capabilities match or exceed those of humans.

Hassabis co-founded DeepMind alongside Mustafa Suleyman (now Microsoft's AI CEO) with the aim of creating AGI within 20 years; Google acquired the company in 2014. As it turns out, we're edging closer to that timeline, and DeepMind's CEO says the company is "dead on track" to achieve the coveted benchmark in the next 5-10 years.

Interestingly, the executive echoed similar sentiments in a separate interview early last month, claiming "AGI is coming and I'm not sure society is ready." He admitted he was worried that society isn't well-equipped, or even prepared, to handle all that it entails, and that the prospect keeps him up at night.

According to DeepMind's CEO:

"There are at least two risks that I worry a lot about. One is bad actors, whether individuals or rogue nations, repurposing AGI for harmful ends. The second one is the technical risk of AI itself. As AI gets more powerful and agentic, can we make sure the guardrails around it are safe and can't be circumvented."

Interestingly, OpenAI CEO Sam Altman has offered a different account, suggesting that these existential woes won't be felt at the AGI moment; instead, he expects it to whoosh by with surprisingly little societal impact within the next five years.

Of course, regulation could play a major role in mitigating some of these issues, ensuring that ever-evolving AI models stay within their guardrails rather than spiraling out of control. Hassabis shares that sentiment:

"It must be nimble, as the knowledge about the research becomes better and better. It also needs to be international. That’s the bigger problem."

But would he hit pause if the challenge presented itself today? The executive claims, "I don't think today's systems are posing any sort of existential risk." For his part, Ethereum co-founder Vitalik Buterin has recommended a soft pause on development to rein in the rapid advancement of AI and avert potential catastrophic harm.

Will AI take your job? It's complicated

Hassabis says that AGI will help boost human productivity. (Image credit: Getty Images | WPA Pool)

Many people today worry about losing their jobs to AI. While there's no definitive data on this yet, multiple reports have surfaced suggesting that up to 54% of banking jobs could be automated using AI.

Even Microsoft co-founder Bill Gates has claimed AI will replace humans for most things, with the exception of fields like biology, energy, and coding. The philanthropic billionaire reasoned that these fields are too complex and will still require human intervention.

According to Hassabis, however, the AI revolution will create new jobs built around AI-powered tools. The executive claimed that AI will supercharge our productivity at work, potentially making us "superhuman."

He also argued that certain professions will always be reserved for humans. "There's a lot of things that we won't want to do with a machine," he added. As an example, Hassabis noted that patients won't want to be attended to or treated by a robot nurse, since a machine would struggle to show empathy.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
