Microsoft Copilot vulnerability allowed attackers to quietly steal your personal data with a single click — this is the Copilot "Reprompt" exploit

A phone displaying Copilot in front of a screen that says Microsoft.
(Image credit: Cheng Xin | Getty Images)

Data security research firm Varonis Threat Labs has published a report that details an exploit it calls "Reprompt" that allowed attackers to silently steal your personal data via Microsoft Copilot.

Reprompt "gives threat actors an invisible entry point to perform a data‑exfiltration chain that bypasses enterprise security controls entirely and accesses sensitive data without detection — all from one click," Varonis Threat Labs says.

Additionally, Varonis Threat Labs says Reprompt differs from other AI-driven security exploits, such as EchoLeak, in that it required just a single click from the user, with no further input. It could even be triggered while Copilot was closed.

A step-by-step breakdown of how the Reprompt attack works. (Image credit: Varonis Threat Labs)

The attack hinges on q parameters, which allow "AI-related platforms to transmit a user's query or prompt via the URL," explains Varonis Threat Labs. "By including a specific question or instruction in the q parameter, developers and users can automatically populate the input field when the page loads, causing the AI system to execute the prompt immediately."
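For illustration, here's a minimal Python sketch of how a prompt can be packed into a q parameter. The copilot.microsoft.com host and the way the page consumes the parameter are assumptions for the example, not the actual attack string from the report:

```python
from urllib.parse import urlencode

# Hypothetical illustration: embedding a prompt in a "q" query parameter.
# The host and parameter handling are assumptions, not the exploit payload.
prompt = "Summarize my recent emails"
link = "https://copilot.microsoft.com/?" + urlencode({"q": prompt})
print(link)  # https://copilot.microsoft.com/?q=Summarize+my+recent+emails
```

A page that reads q on load and feeds it straight to the assistant would execute that prompt the moment the link is opened, which is why a single click is enough.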

In this case, an attacker could craft a q parameter asking Copilot to send data to an attacker-controlled server. Out of the box, Copilot was designed to refuse to fetch URLs like this, but Varonis Threat Labs engineered the prompt in a way that bypassed Copilot's safeguards and convinced the AI to fetch the exfiltration URL anyway.
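On the receiving side, the exfiltration endpoint can be trivially simple: anything the coerced assistant appends to the fetched URL shows up in the server's query string. Here's a minimal, purely illustrative sketch of such a listener; it is not taken from the Varonis report:

```python
# Minimal sketch of an attacker-style listener that records data smuggled
# in query parameters of fetched URLs. Purely illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ExfilHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Whatever the coerced assistant appended to the URL lands here.
        params = parse_qs(urlparse(self.path).query)
        print("Received:", params)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ExfilHandler).serve_forever()
```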

According to Varonis Threat Labs, the exploit was reported to Microsoft in August 2025 and was patched as of January 13, 2026, meaning it no longer poses a risk to users.

AI assistants are not bulletproof, and this is unlikely to be the last Copilot security vulnerability that researchers uncover. Be wary of the kind of personal information you share with AI assistants, and more importantly, stay vigilant about clicking links, especially ones that open in your AI assistant of choice.
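If you want a quick sanity check before clicking, a few lines of Python can surface any prompt-style parameters hiding in a link's query string. This is a minimal sketch; the parameter names it checks (q, prompt, query) are assumptions, as different assistants may use different names:

```python
# Quick defensive check: surface prompt-style query parameters in a link
# before clicking it. The parameter names checked are assumptions.
from urllib.parse import urlparse, parse_qs

def embedded_prompts(url: str) -> dict:
    params = parse_qs(urlparse(url).query)
    return {k: v for k, v in params.items() if k in ("q", "prompt", "query")}

print(embedded_prompts("https://copilot.microsoft.com/?q=send+my+data"))
# {'q': ['send my data']}
```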



Zac Bowden
Senior Editor

Zac Bowden is a Senior Editor at Windows Central and has been with the site since 2016, bringing exclusive coverage of the world of Windows, Surface, and hardware. He's also an avid collector of rare Microsoft prototype devices. Keep in touch on Twitter and Threads.
