OpenAI is reportedly prioritizing shiny products over safety processes (again), even as one AI safety researcher puts the probability of AI ending humanity at 99.999999%

OpenAI and Microsoft logos
(Image credit: Getty Images | NurPhoto)

Last year, after OpenAI CEO Sam Altman was briefly ousted from the company for not being “consistently candid” with the board, a handful of high-profile executives, including former Head of Alignment Jan Leike, left the firm.

Leike revealed that he had disagreed with OpenAI's leadership over its safety strategy, indicating that safety culture and processes had taken a back seat as the company prioritized shiny products in its pursuit of AGI.

“We had more thorough safety testing when [the technology] was less important,” said a person well-versed in the development and testing of OpenAI's yet-to-launch o3 model. The source further disclosed that as these AI models scale and become more capable, the threat they could pose to humanity grows as well.

“But because there is more demand for it, they want it out faster. I hope it is not a catastrophic misstep, but it is reckless. This is a recipe for disaster.”

This isn't OpenAI's first rodeo with safety processes


This isn't the first time OpenAI has come under scrutiny for rushing its safety processes. In 2024, a separate report suggested that OpenAI rushed GPT-4o's launch, leaving the safety team with little time to test the model.

Perhaps more concerning, the company reportedly sent out invites for the launch celebration party before the safety team had even run its tests. "They planned the launch after-party before knowing if it was safe to launch," the source added. "We basically failed at the process."

In comparison, testers had up to six months to evaluate GPT-4 before it shipped, and a person well-versed in the situation revealed that the evaluation and safety tests didn't unearth dangerous capabilities until two months into that testing phase.

According to the source:

“They are just not prioritising public safety at all."

Against that backdrop, multiple reports suggest that these advances could spell doom for humanity. AI safety researcher Roman Yampolskiy has put his p(doom), the estimated probability that AI ends humanity, at 99.999999%.

However, OpenAI claims that it has improved its safety processes by automating some of the tests, which has allowed the company to shorten the time allocated for testing. The ChatGPT maker also says its models have been evaluated and mitigations put in place to guard against catastrophic risks.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
