Google has issued a "red alert" to 1.8billion users of its email service, Gmail, over a new artificial intelligence scam reportedly being employed by cyber criminals. According to tech expert Scott Polderman the data-stealing scam involves another Google product, Gemini, an AI assistant known as a chatbot.
"So hackers have figured out a way to use Gemini - Google's own AI - against itself," he explained. "Basically, hackers are sending an email with a hidden message to Gemini to reveal your passwords without you even knowing." Scott went on to add that this scam differs from those in the past as it is "AI against AI" and could set a precedent for future attacks in the same way.
He elaborated: "These hidden instructions are getting AI to work against itself and have you reveal your login and password information." Scott continued, detailing why so many users are falling foul of the problem.
READ MORE: Prince William halts summer break to make huge announcement
READ MORE: You have a high IQ if you can spot goat's owner hiding in plain sight
"There is no link that you have to click [to activate the scam]," he said. "It's Gemini popping up and letting you know you are at risk."
Scott also advised that Google has previously gone on record to say it will "never ask" for your login information or "never alert" you of fraud through Gemini.
Meanwhile, tech expert Marco Figueroa added that hackers achieve success by setting the font size to zero and the text colour to white within their emails, before inserting prompts that are invisible to users but ones that are still picked up by Gemini.
Writing in response, one TikTok shared additional advice to help protect you from the scam. "To disable Google Gemini's features within your Gmail account, you need to adjust your Google Workspace settings," they wrote.
"This involves turning off 'SMART FEATURES' and potentially disabling the Gemini app and its integration within other Google products."
Another revealed: "I never use Gemini, still I might change my password just in case."
A third person exclaimed: "I’m sick of all of this already. I’m going back to pen and paper!"
And similarly a fourth shared: "I quit using Gmail a long time ago! Thank you for the alert! I’ll go check my old accounts."
Google warned in its security blog last month: "With the rapid adoption of generative AI, a new wave of threats is emerging across the industry with the aim of manipulating the AI systems themselves. One such emerging attack vector is indirect prompt injections.
"Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions.
"As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures."
However, it attempted to reassure users, adding: "Google has taken a layered security approach introducing security measures designed for each stage of the prompt lifecycle. From Gemini 2.5 model hardening, to purpose-built machine learning (ML) models detecting malicious instructions, to system-level safeguards, we are meaningfully elevating the difficulty, expense, and complexity faced by an attacker.
"This approach compels adversaries to resort to methods that are either more easily identified or demand greater resources."
You may also like
Liam Payne's sister's heartbreaking tribute on 15th anniversary of One Direction
James O'Brien issues grovelling apology live on LBC after antisemitism row
Mel C working on huge film as fulll Spice Girls reunion hits 'problem'
Greedy Ozzy Osbourne fans cashing in on star's death by selling 'free gift' from last gig
Patient numbers up 1,300% in 5 yrs at Hyderabad drug treatment clinic