Anthropic's CEO says "we do not understand how our own AI creations work" — and yes, we should all be "alarmed" by that
Dario Amodei admits that Anthropic doesn't understand exactly how its own models work, raising alarm about the technology's risks to society.

AI safety researcher Roman Yampolskiy has put the probability that AI will end humanity at 99.999999%, arguing that the only way to avoid that outcome is to not build the technology in the first place.
Interestingly, as generative AI grows more advanced and capable, Anthropic CEO Dario Amodei has admitted that the company doesn't fully understand how its own AI models work.
In an essay recently published on his personal website, Amodei wrote:
“People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology.”
He argues that this opacity poses a significant risk of harmful outcomes. To head off those scenarios, Amodei recommends that AI labs prioritize interpretability research before AI advances to a level that might be impossible for humans to control.
These systems will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy that I consider it basically unacceptable for humanity to be totally ignorant of how they work.
Anthropic CEO Dario Amodei
However, Anthropic's CEO concedes that much work remains before interpretability reaches the level required to keep the ever-evolving technology under control.
A recent report suggested that OpenAI is cutting corners in its AI development, slashing the time allocated to safety testing. The report further detailed how the ChatGPT maker is taking this risky shortcut to maintain its lead over rivals.
It isn't the first time the ChatGPT maker has been put on the spot over safety concerns. A sizable chunk of the company's founding team has departed, citing safety issues, with some saying that "shiny" products like AGI had taken precedence over safety culture and processes.
Every major tech corporation is seemingly racing to grab a share of the generative AI bubble, with Microsoft alone committing $80 billion to AI investments. Even so, investors remain concerned about the technology, largely because of its high capital demands and the lack of a clear path to profitability.
Over the past few months, people have raised concerns about the technology's privacy and security as it gains broad adoption, and, perhaps more importantly, about its long-term impact on humanity.
Tech executives, including Microsoft co-founder Bill Gates, predict that AI will replace humans for most things, but what stands out most (at least for me) is the technology's threat to humanity.
