Microsoft AI CEO Mustafa Suleyman raises the alarm about the dangers of "conscious AI" — a prospect that’s seemingly keeping Google DeepMind’s CEO up at night

Mustafa Suleyman, chief executive officer of Microsoft AI, during an interview on "The Circuit with Emily Chang" at the Microsoft campus in Redmond, Washington, US, on Wednesday, March 19, 2025.
(Image credit: Getty Images | Bloomberg)

What's the end game for the tech corporations pouring billions of dollars into the generative AI landscape? The easiest answer would be AGI (artificial general intelligence), but the term has seemingly turned into a buzzword with a different meaning each time it's mentioned.

In simple terms, it refers to a powerful AI system that surpasses human cognitive capabilities. However, Microsoft's multibillion-dollar partnership agreement with OpenAI reportedly defines AGI as an AI system capable of generating at least $100 billion in profit.

That threshold ties the ChatGPT maker to Microsoft at the hip, even as the startup faces immense pressure from investors to restructure as a for-profit entity or risk losing funding, leaving it exposed to outside interference and hostile takeover bids.

But as things stand, the next-gen technology is advancing and scaling at an alarming rate, potentially rendering a PhD obsolete before you even graduate. More concerning, Microsoft AI CEO Mustafa Suleyman recently published a blog post titled "We must build AI for people; not to be a person," suggesting that conscious AI might be coming.

According to Suleyman:

"It shares certain aspects of the idea of a “philosophical zombie” (a technical term!), one that simulates all the characteristics of consciousness, but internally it is blank. My imagined AI system would not actually be conscious, but it would imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness."

I’m growing more and more concerned about what is becoming known as the “psychosis risk”, and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.

Microsoft AI CEO Mustafa Suleyman

Suleyman says his focus and mission are to create safe and beneficial AI as the industry forges toward superintelligence, technology designed to make the world a better place through products like Microsoft Copilot that help people achieve incredible feats beyond their imagination.

The executive further highlighted that his mission at Microsoft is to create AI that makes us more human while deepening our trust and understanding of one another. It builds on his ambition to transform Copilot into a companion and a real friend.

According to Suleyman:

"This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won’t always get it right, but this humanist frame provides us with a clear north star to keep working toward."

Microsoft's AI CEO reiterates the importance of building AI for people rather than transforming the technology into a digital person. While he has championed AI companions, he still insists on guardrails to protect people while ensuring the technology delivers value.

Perhaps more concerning, Suleyman says conscious AI isn't a far-fetched theory; it could be built using today's technologies, coupled with capabilities expected to mature within the next two to three years. What's more, the feat won't require "expensive bespoke" training. Instead, the executive says it can be achieved with large model API access, natural language prompting, basic tool use, and regular code.

The prospect of AGI keeps Google DeepMind's CEO up at night

Microsoft's AI CEO Mustafa Suleyman says conscious AI is coming and society isn't ready, highlighting the importance of guardrails to prevent the technology from spiraling out of control. (Image credit: Getty Images | Kirill Kudryavtsev)

Over the past few years, prominent voices have warned that continued progress in AI could spell doom for humanity. AI safety researcher Roman Yampolskiy claims there's a 99.999999% probability that AI will end humanity, adding that the only way to avoid that outcome is not to build AI in the first place.

OpenAI CEO Sam Altman, however, has remained rather optimistic about AI's impact on society, despite claims that the company prioritizes shiny products like AGI over safety processes. He has indicated that the firm will hit the AGI benchmark within the next five years, and, interestingly, claimed the milestone would whoosh by with surprisingly little impact on society.

Elsewhere, Anthropic CEO Dario Amodei recently admitted that the company doesn't fully understand how its own models work. That admission came after Google DeepMind CEO Demis Hassabis indicated that AGI is coming and warned that society might not be ready for everything it entails, adding that the prospect keeps him up at night.

Mustafa Suleyman says conscious AI will have fluent language to express itself, memory, a sense of self, intrinsic motivation, goal setting, and more. The executive says the phenomenon won't emerge by accident; instead, he foresees engineers creating conscious AI by deliberately combining these capabilities.

Suleyman says society isn't ready for conscious AI, which is why he is calling for guardrails to prevent it from coming to life. "Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness," added Microsoft's AI CEO.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
