OpenAI scientists wanted "a doomsday bunker" before AGI surpasses human intelligence and threatens humanity

Sam Altman, US entrepreneur, investor, and co-founder and CEO of artificial intelligence company OpenAI, speaks alongside the company's co-founder and chief scientist Ilya Sutskever at Tel Aviv University in Tel Aviv on June 5, 2023. (Image credit: Jack Guez | AFP via Getty Images)

Many users remain reluctant to hop onto the AI bandwagon, keeping the technology at arm's length over privacy, security, and existential concerns. Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, has warned that the probability of AI ending humanity (his p(doom)) is 99.999999%.

And if recent reports are anything to go by, major AI labs may be on the cusp of hitting the coveted AGI (artificial general intelligence) benchmark. More specifically, OpenAI and Anthropic predict that AGI could be achieved within this decade.

OpenAI co-founder and then-chief scientist Ilya Sutskever reportedly told colleagues:

“We’re definitely going to build a bunker before we release AGI.”

