Recent research has revealed three vulnerabilities in Google’s Gemini AI suite that could have allowed attackers to manipulate the assistant and extract user information silently.
This includes users' saved data and location history.
The issues affected Gemini Cloud Assist, Gemini Search Personalisation Model, and Gemini’s Browsing Tool.
Disclosing the vulnerabilities, Tenable Research stated, “Google has remediated the vulnerabilities, and no action is required from end users.”
The first vulnerability was identified in Gemini Cloud Assist, where poisoned log entries could be injected and later interpreted as legitimate prompts.
This opened a path for attackers to influence Gemini’s behaviour or attempt cloud resource access.
In the Gemini Search Personalisation Model, attackers could insert queries into a victim’s Chrome search history, which Gemini then treated as trusted input, potentially exposing saved data and location information.
The third issue involved the Gemini Browsing Tool, which could be tricked into sending hidden outbound requests embedding private information to attacker-controlled servers.
Together, these vulnerabilities open pathways for attackers to both infiltrate Gemini and extract sensitive data.
Infiltration and exfiltration vectors
Tenable’s research showed that routine Gemini features could serve as entry points for attackers, turning normal functionality into potential vulnerabilities.
“Gemini draws its strength from pulling context across logs, searches, and browsing. That same capability can become a liability if attackers poison those inputs,” said Tenable’s senior security researcher, Liv Matan.
Like any powerful technology, large language models (LLMs) such as Gemini remain susceptible to vulnerabilities, he added.
Infiltration could occur through indirect prompt injection, where attacker-controlled content is silently pulled into Gemini’s context.
Examples include log poisoning, where malicious entries are added to cloud logs, or manipulation of a user’s search history in Chrome.
Once a malicious prompt has been successfully injected, attackers could bypass Google’s existing defences and leverage the browsing tool to extract data silently.
While Google has implemented safeguards, including link redirection, markdown filtering, and truncating suspicious outputs, Tenable found that certain functionality-level blind spots remain.
Tool execution provides a pathway for attackers to embed sensitive information into outbound requests, allowing exfiltration without triggering visible warnings or alerts.
The findings highlight the importance of treating AI platforms as potential attack surfaces.
Recommendations for enterprises
Enterprises should treat AI-driven features as active attack surfaces rather than passive tools.
Security teams should regularly audit logs, search histories, and integrations for signs of manipulation or poisoning.
Monitoring for unusual outbound requests is important, as such activity could indicate attempts at data exfiltration.
Enterprises should further test the resilience of AI-enabled services against prompt injection and adopt a layered approach to defence.
The disclosure highlights that securing AI is not only about patching individual flaws but about anticipating new attack vectors where the AI itself can be exploited.