Stay updated with the latest Cybersecurity News on our TecnetBlog.

Vulnerabilities in Google Gemini Allow User Data to Be Leaked

Written by Alexander Chapellin | Oct 1, 2025 4:00:00 PM

Three new vulnerabilities in Google's Gemini AI put sensitive information at risk, including stored data and users’ location.

Known as the “Gemini Trifecta,” these flaws highlight that AI-powered assistants can not only be targets of attack, but also used as a means to carry out malicious actions.

The discovery revealed serious privacy risks across different parts of the Gemini ecosystem. Although Google has already addressed the issues with updates, the incident serves as a reminder of the security challenges faced by increasingly personalized and complex AI platforms.

Each vulnerability affected a different function within Gemini, demonstrating that even the most advanced systems can have multiple weak points if they are not designed with a solid security architecture from the outset.

 

 

“Gemini Trifecta”: The Cloud Vulnerabilities That Shook AI

 

One of the most concerning vulnerabilities found in the Gemini ecosystem directly affected its cloud assistant. In this case, attackers could have exploited a prompt injection in Google Cloud tools, opening the door to actions such as compromising cloud resources or launching phishing attacks.

What’s most unsettling is how this attack worked: system-generated logs (which Gemini is capable of automatically summarizing) could be “poisoned” with hidden malicious instructions. In other words, if someone manipulated those logs with disguised commands, Gemini would interpret them as legitimate instructions.

This finding reveals a new class of threat to artificial intelligence: log injections—a technique that has been little explored until now, allowing attackers to indirectly manipulate the inputs processed by AI models.

 

 

Another critical vulnerability detected in Gemini directly affected its search personalization model. In this case, attackers could manipulate the behavior of the AI assistant by injecting malicious queries into the user's Chrome search history.

This allowed them to trick Gemini into interpreting those searches as legitimate, potentially leading to the leakage of stored information, including personal data and the user's location.

The flaw highlights how sensitive the use of personalized data can be in smart assistants. When manipulated inputs are combined with automated functions, even browsing history can become a gateway for targeted attacks.

Gemini’s browsing tool, designed to enhance the user experience when interacting with the web, also turned out to be a weak point. A critical vulnerability in this feature allowed attackers to directly extract confidential information stored by the user.

The attack followed a simple yet effective methodology: first, a malicious message disguised as harmless content was injected. Then, when Gemini processed that message as if it were a valid command, the system could be manipulated to send private data to an external server controlled by the attacker.

 

 

This infiltration and exfiltration process left the door open for all kinds of sensitive information to be leaked—from browsing histories to personal data—without the user ever noticing.

Once again, this underscores the importance of securing every component of an AI platform, especially when it has access to such valuable information.

 

Read more: EvilAI: The Malware Disguised as an AI Tool

 

How Attackers Managed to Leak Data Using Gemini Without Being Detected

 

Attackers found rather stealthy ways to carry out what is known as indirect injection in Gemini. One technique involved embedding malicious instructions within the User-Agent header in logs, or even using JavaScript to insert hidden queries into a victim’s browser history without their knowledge.

But injecting commands was only the first step. The real challenge was extracting the data while bypassing Google’s security filters that block links, images, and other sensitive outputs.

To achieve this, the attackers exploited Gemini’s own browsing tool, using it as a side channel. They crafted messages that instructed the model to visit a specific URL—but the clever part of the attack was that the URL itself contained private user information, embedded directly in the request. That request was then sent to a server controlled by the attacker, resulting in a silent data exfiltration.

This type of leak didn’t occur through the model’s direct responses, but rather through internal tools, which helped evade many of Google’s built-in defenses.

Fortunately, Google has already fixed all three vulnerabilities. The implemented fixes include:

 

  1. Preventing hyperlinks from appearing in summaries generated from logs.

  2. Reverting the search personalization model that was vulnerable to injections.

  3. Blocking potential data leaks through the browsing tool in attacks involving indirect requests.

 

These findings are a reminder that while AI is evolving rapidly, security must evolve even faster.