ChatGPT downplays AI's threat to humanity, despite one researcher's claimed "99.999999% probability" of doom
OpenAI's chatbot doesn't rank AI as a major threat to humanity, pointing instead to climate-induced societal collapse.

With the emergence and broad adoption of generative AI, job security, privacy, and safety are among people's biggest concerns right now. Perhaps more alarming, however, is the technology's potential existential threat to humanity.
Last year, AI safety researcher Roman Yampolskiy claimed there's a 99.999999% probability that AI will end humanity, and that the only way to avoid this outcome is not to build AI in the first place. OpenAI CEO Sam Altman, however, claims that as AI advances and scales, the technology will become smart enough to prevent itself from causing existential doom.
More recently, a Reddit user shared an interesting graph generated by ChatGPT, which highlighted potential causes that might contribute to the collapse of human civilization by 2150.
It's worth noting, as usual, that AI-generated responses aren't gospel truth: they depend heavily on the user's prompt engineering skills and lift most of their information from the internet.
A separate report flagged a critical issue, highlighting several instances where Copilot struggled to distinguish fact from opinion. As such, it's important to take AI's conclusions with a grain of salt.
ChatGPT predicts the end of the world (via r/ChatGPT)
The graph lists several causes that might contribute to the collapse of human civilization by 2150, including nuclear war, asteroid impact, climate-induced societal collapse, engineered pandemics, and artificial general intelligence (AGI) misalignment.
Contrary to what the doomsayers might expect, the AI-generated graph doesn't give AI a high probability of ending humanity. Instead, it lists climate-induced societal collapse as the most likely cause of civilization's end by 2150.
One Reddit user, commenting on the AI-generated graph, put it this way:
"Every time AI is asked a question it will throw out an answer as if it’s fact unless it’s heavily prompted to use sources and even then it will sometimes throw in something else. Just a word’s weight of difference in a prompt can entirely change the outcome of the result. As an example I asked the same question below but the percentages are completely different, this one is maxing out at about 15% for climate change whereas OP’s is 37%, nuclear war 10% rather than 22%. Mine also picks up on completely different potential scenarios and rates AI misalignment higher than nuclear war.
It’s not formulating anything it’s spitting out an educated guess with figures plucked from varying sources based on its prompting, sometimes it’s not even sourcing things correctly. Essentially AI will spit out an answer even if it’s wrong so especially for stuff like this it’s a horoscope, it looks and sounds believable but it could well be completely incorrect. LLMs are not trained to model or simulate and when asked speculative questions, their answers can be as heavily weighted by prompting as much as it’s sources."
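The Redditor's point about prompt sensitivity is easy to verify yourself. Below is a minimal sketch using OpenAI's Python SDK that asks the same speculative question twice with slightly different wording and prints both answers for comparison; the model name, prompts, and sampling settings are illustrative assumptions, not anything from the original Reddit thread.

```python
# Demonstrates prompt sensitivity: two near-identical speculative questions
# can yield very different "percentage" answers from the same model.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "What will be the main cause of the end of human civilization by 2150? "
    "Give a percentage estimate for each cause.",
    "List the most likely causes of civilizational collapse by 2150, "
    "with a rough probability for each.",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative; any chat model shows the effect
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # default sampling, so answers also vary run to run
    )
    print(f"PROMPT: {prompt}")
    print(response.choices[0].message.content)
    print("-" * 60)
```

Running this a few times makes the Redditor's "horoscope" comparison concrete: the figures shift with wording and even between runs of the identical prompt, because the model is sampling plausible-sounding text rather than computing a forecast.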
For good measure, I asked Microsoft Copilot, "What will be the main cause of the end of human civilization by 2150?" It furnished me with the following response:
"Predicting the exact cause of civilization's downfall is tricky, but experts highlight several major threats. Climate change is a top concern—unchecked global warming could trigger extreme weather, food shortages, and geopolitical instability. Other possibilities include nuclear war, pandemics, AI risks, and resource depletion.While human extinction is unlikely, civilization could face severe disruptions if these challenges aren't addressed. What do you think is the biggest threat?"
Last year, a report emerged claiming that the top user complaint reaching Microsoft's AI division was that Copilot isn't as good as ChatGPT. Interestingly, Microsoft quickly dismissed the claims, shifting the blame to poor prompt engineering skills. "You're just not using it as intended," the company argued.
The software giant has since launched Copilot Academy, a program designed to equip users with advanced AI prompt engineering skills, allowing them to make the most of tools like Copilot.
This news comes after Anthropic CEO Dario Amodei admitted his company doesn't know how their own AI models work, raising critical security and safety concerns among users. Similarly, OpenAI CEO Sam Altman previously indicated that there's no big red button to stop the progression of AI.

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.