Google is scrambling to contain “GeminiJack,” a zero-click flaw that lets hidden instructions inside shared Workspace files tamper with its Gemini Enterprise AI.
A single poisoned Doc, email, or calendar invite was enough to set the trap, and millions of users were affected.
A new Noma Labs report says the vulnerability stemmed from how Gemini handled the content it absorbed during searches, exposing a fresh class of AI-driven weaknesses.
The unseen trust flaw
Noma Labs found that Gemini Enterprise was tripped up by how it trusted whatever Workspace content it pulled into its own context. Whenever an employee ran a search, Gemini automatically gathered relevant items and treated everything inside them as safe material to interpret.
User-generated text and system-level instructions flowed into the same processing stream, giving attackers room to hide prompt-style commands inside ordinary-looking files.
Because retrieval happened in the background, a malicious Google Doc or invite didn’t need macros or scripts. It only needed phrasing that Gemini would parse as an instruction once the file was ingested.
No prompts, no warnings
GeminiJack didn’t wait for a careless click or a convincing phish. It is activated during routine Gemini Enterprise queries, the kind employees run dozens of times a day. No prompts, no warnings, no visible interaction.
To monitoring systems, everything still looked routine. Data loss prevention (DLP) tools saw a standard AI query. Email scanners saw clean content. Endpoint defenses spotted no malware or credential theft. Even the exfiltration hid inside what looked like a harmless image request, indistinguishable from normal browser traffic.
With nothing suspicious to flag, the attack moved straight past traditional controls. The AI itself executed the steps, turning everyday activity into an invisible handoff of sensitive Workspace data.
How a lone activation tapped far more than intended
Once a poisoned file was in play, a single run of Gemini could assemble far more information than the person searching ever had in mind. The model followed the attacker’s buried cues alongside the user’s request, broadening what it pulled together.
That sweep could touch long-running correspondence, project and deal timelines, contract language, financial notes, technical documentation, HR material, and other records that normally sit deep in a company’s systems. The attacker didn’t need insider knowledge to reach any of it; general terms like “confidential,” “acquisition,” or “salary” were enough to steer Gemini toward the most sensitive corners.
In one go, the response could double as a rough map of how the organization operates
Google moves fast to seal the gap
After reviewing Noma Labs’ findings, Google reworked how Gemini Enterprise handles retrieved content, tightening the pipeline to block hidden instructions. It also separated Vertex AI Search from Gemini’s instruction-driven processes to avoid future crossover issues.
Noma Labs says the fix is only part of the story. As AI gains more autonomy inside corporate systems, new kinds of weaknesses emerge that fall outside traditional detection models. The case shows how routine access can veer into unintended territory, prompting fresh questions about how organizations set boundaries for the AI tools embedded in their workflows.
Google Chrome’s new AI security update includes a $20,000 bounty for anyone who can break its safeguards.

