Microsoft Copilot's own default configuration exposed users to the first-ever "zero-click" AI attack, but there was no data breach

In this photo illustration, Microsoft Copilot AI logo is seen on a smartphone screen.
(Image credit: Getty Images | SOPA)

Security researchers from Aim Labs uncovered a critical attack dubbed 'EchoLeak' impacting Microsoft 365 Copilot. The vulnerability could potentially allow bad actors to gain unauthorized access to sensitive data from Microsoft 365 Copilot users without any interaction.

The security researchers presented their findings to Microsoft, prompting the tech giant to assign the vulnerability the identifier CVE-2025-32711. EchoLeak marks the first known zero-click attack on an AI agent (via Fortune).

The cybersecurity firm presented its findings to Microsoft earlier this year in January. The tech giant rated the vulnerability as critical, but it has since fixed the issue on sever-side in May.

Additionally, the tech giant indicated that no user action is required in the resolution of this issue, further indicating that there's no evidence of any real-world exploitation from bad actors.

This vulnerability represents a significant breakthrough in AI security research because it demonstrates how attackers can automatically exfiltrate the most sensitive information from Microsoft 365 Copilot’s context without requiring any user interaction whatsoever.

Aim Security co-founder and CTO, Adir Gruss

According to the researchers, the vulnerability constituted "an LLM scope violation," which allows bad actors can leverage an AI model to access sensitive data, including chat histories, OneDrive documents, Sharepoint content, Teams conversations, and more.

Perhaps more concerning, Gruss indicated that Microsoft Copilot's default configuration made most organizations more susceptible to malicious attacks before the tech giant fixed the issue. However, the executive indicated that evidence gathered suggested that no customers were impacted by the vulnerability.

According to a Microsoft spokesman on the matter:

“We appreciate Aim Labs for identifying and responsibly reporting this issue so it could be addressed before our customers were impacted.”

To that end, Microsoft says it has updated its products to mitigate the issue. It has also integrated elaborate defense mechanisms to bolster Microsoft 365 Copilot's security.

It will be interestingly to see how Microsoft combats security threats threatening its AI tools, especially after former Microsoft security architect Michael Bargury demonstrated 15 different ways to breach Copilot's security guardrails.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.

You must confirm your public display name before commenting

Please logout and then login again, you will then be prompted to enter your display name.