A Gemini-powered scam tricks Gmail users with fake AI alerts by hijacking summaries using hidden prompts. Experts urge caution and manual email checks.
A recently revealed phishing method is exposing Gmail users to significant danger by exploiting Google’s Gemini AI summarization tool. Phishers are injecting secret prompts into emails using HTML and CSS illusions—white text or zero font size, for example—that are indistinguishable to the naked eye but completely legible to Gemini.
When users are tricked into clicking “Summarize with Gemini,” the AI reads these hidden instructions and produces a deceptive summary, typically warning of a phantom security breach and telling users to call an illegitimate support hotline. This technique, prompt injection, evades conventional spam filters and uses user faith in AI-created text to carry out exceptionally sophisticated scams.
Experts See Increasing Danger as Google Rushes to React
Mozilla’s 0Din team security researchers discovered the vulnerability and showed how simple it was to manipulate Gemini to generate misleading summaries. Google has confirmed the problem and is currently implementing mitigation techniques, such as filtering of obfuscated content and red-team testing of Gemini outputs.
Yet a complete patch has not been issued, leaving users vulnerable to this new vector of attack. Users are told to avoid summarizing suspicious messages by Gmail experts, look at messages by hand, and turn on two-factor authentication to protect their accounts. As AI solutions become ever more integrated into daily workflows, this event underscores the need for strong protection measures and user vigilance around AI-generated content.