OpenAI scientists wanted "a doomsday bunker" before AGI surpasses human intelligence and threatens humanity

Sam Altman, co-founder and CEO of OpenAI, and the company's co-founder and chief scientist Ilya Sutskever speak together at Tel Aviv University in Tel Aviv on June 5, 2023. (Image credit: Jack Guez | AFP via Getty Images)

Many users remain reluctant to hop onto the AI bandwagon, keeping it at arm's length over privacy, security, and existential concerns. Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, has warned that there's a 99.999999% probability (his p(doom)) that AI will end humanity.

And if recent reports are anything to go by, leading AI labs may be on the precipice of hitting the coveted AGI (artificial general intelligence) benchmark. More specifically, OpenAI and Anthropic predict that AGI could be achieved within this decade.

Despite the potential threat AGI could pose to humanity, OpenAI CEO Sam Altman claims the danger won't manifest at the AGI moment itself. Instead, that moment will simply whoosh by with surprisingly little societal impact.

However, former OpenAI chief scientist Ilya Sutskever has expressed concern about AI surpassing human cognitive capabilities.

As a safeguard, Sutskever recommended building "a doomsday bunker" where the firm's researchers could seek cover from an unprecedented rapture following the release of AGI (via The Atlantic).

During a meeting among key scientists at OpenAI in the summer of 2023, Sutskever indicated:

“We’re definitely going to build a bunker before we release AGI.”

Sutskever's comment about the bunker was first cited in Karen Hao's upcoming book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Interestingly, this wasn't the only time the AI scientist referenced the safety bunker.

The chief scientist often brought up the bunker during OpenAI's internal discussions and meetings. According to one researcher, multiple people shared Sutskever's fears about AGI and the rapture it could bring upon humanity.

Sutskever, now founder of Safe Superintelligence Inc., declined to comment on the matter, but the report raises serious concern, especially since he was intimately involved in the development of ChatGPT and other flagship AI-powered products. Are we truly ready for a world with AI systems that are smarter and more powerful than humans?

This news comes after DeepMind CEO Demis Hassabis indicated that Google could be on the verge of achieving AGI following new updates to its Gemini models. He also raised concerns, saying society isn't ready and that the prospect of artificial general intelligence keeps him awake at night.

Elsewhere, Anthropic CEO Dario Amodei admitted that the company doesn't know how its own models work. He further indicated that society should be concerned about this lack of understanding and the threats it could pose.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
