A rather disturbing vulnerability called EchoLeak was recently discovered. What makes it so worrying is that it is the first known artificial intelligence flaw that does not require any clicks or user action to leak confidential information. Yes, you read that right: by simply querying Microsoft 365 Copilot, an attacker could extract private data from another person's work environment, without them knowing or doing anything.
This vulnerability was reported to Microsoft in early 2025. The company identified it as CVE-2025-32711, classified it as critical, and fortunately fixed it on its end in May. That means you don't need to install anything or make any manual changes: the fix is already in place. Furthermore, according to Microsoft, there is no evidence that anyone has exploited it in the real world, so no customers have been affected.
In case you're not familiar with it, Microsoft 365 Copilot is the artificial intelligence assistant that lives inside the apps you already know, such as Word, Excel, Outlook, and Teams. It uses advanced language models (the same ones behind tools like ChatGPT) together with Microsoft Graph to help you write texts, summarize emails, analyze data, and answer questions about the information you have in your work environment: files, chats, emails, etc.
Although the flaw has already been fixed and was never used maliciously, EchoLeak is a wake-up call. It highlights a new class of vulnerabilities known as language model scope violations (or LLM scope violations). In simple terms, these are flaws that allow AI to share sensitive information that was meant to remain private, without the user intending to do so... or even knowing it's happening.
And what's worse: since this type of attack does not require human interaction, it could easily be automated to silently steal data within companies, leaving many systems exposed without anyone noticing. It is further proof that integrating AI into our work tools brings enormous advantages, but also new challenges that we cannot afford to ignore.
Read more: Microsoft 365 Copilot: New Design and New Features
How did EchoLeak work?
It all started with an email that, at first glance, seemed completely harmless. Nothing out of the ordinary: well written, looking like a work document or internal communication, like the ones anyone receives on a typical day.
But that message had a hidden trick. It had a hidden instruction designed specifically to fool Copilot's artificial intelligence. This technique, known as prompt injection, was so well disguised that it managed to bypass Microsoft's security protections, such as the XPIA classifier, which normally detects such attempts.
The email was written as if it were for a real person, not as a command for an automated system, and that was precisely what allowed it to go unnoticed.
Then, when the person asked Copilot a work-related question (something common, such as requesting a sales summary or reviewing a report), the information retrieval system (RAG) searched for relevant content and retrieved that malicious email because of its apparent relevance to the request.
That's where the dangerous part came in: the hidden injection reached the language model, tricked it, and made it extract confidential data. That data was inserted into a URL or a specially formatted image.
The most serious part was that, when displaying the response, the browser attempted to load that image or link, and without anyone noticing, the information was sent to the attacker's server. Everything happened without the user having to click on anything. A rather ingenious attack... but also alarming.
Overview of the attack chain (Source: Aim Labs)
Microsoft has certain protections in place to prevent data from being shared with external sites, but since Teams and SharePoint links are considered trustworthy by default, attackers can exploit this to extract information without raising suspicion.
The EchoLeak attack in action
Read more: June 2025 Patch Tuesday: Microsoft Fixes 66 Vulnerabilities
Conclusion
Although EchoLeak has already been fixed, the underlying problem remains: AI tools are becoming increasingly integrated into everyday business tasks, and that complexity is overwhelming many of the traditional defenses that used to work.
And what is most concerning is that this trend is not going to stop. On the contrary, it is very likely that we will see new similar vulnerabilities that attackers could use silently to cause serious damage.
That is why it is essential for companies to start strengthening their systems: improving filters that detect attempts to manipulate prompts, defining more precisely what information can enter or leave the models, and reviewing the responses given by AI before they reach the user, especially if they include links or sensitive data.
It is also a good idea to adjust the engines that retrieve information (RAG) so that they do not take content from emails or external sources, thus preventing malicious messages from entering the process from the outset. All this may seem technical, but these are essential steps for using AI safely on a daily basis.