Google has dismissed viral social media rumors accusing the company of secretly using users’ Gmail messages to train its Gemini AI model, calling the circulating claims “false” and “misleading.”
The controversy surfaced earlier this week when posts on X alleged that Gmail users had been “automatically opted in” to allow Google to access private emails and attachments for AI training. The posts advised users to turn off Gmail’s Smart Features — long-standing tools that help filter messages and organize inboxes.
In a statement shared with The Verge, a Google spokesperson clarified that the company has “not changed anyone’s settings” and emphasized:
“Gmail Smart Features have existed for many years, and we do not use your Gmail content for training our Gemini AI model.”
The company further noted that its recent privacy policy updates appear to have been misinterpreted, sparking what Google describes as unnecessary panic.
Some users, however, reacted strongly online. One X user called the situation “the largest consent manufacturing operation in history.” Antivirus maker Malwarebytes even repeated the claims in a blog post before later updating its article, calling the situation “a perfect storm of misunderstanding.”
What Google’s privacy policy actually allows
According to Google Workspace’s privacy policy:
- Data you type directly into Gemini (e.g., prompts) may be retained and can be used to improve AI models.
- Data from Workspace apps like Gmail, Docs, and Sheets is not used to train Gemini unless the user explicitly instructs the AI to access it.
For example, Gemini can access your Google Docs if you ask it to proofread text — but it does not automatically scan Gmail or other Workspace apps for training material.
Recurring rumors about Gmail in 2025
This is not the first time Gmail has faced misinformation this year. In September, a series of viral posts falsely claimed that Google had issued a global security warning urging users to change their passwords immediately.
With more than 2.5 billion Gmail users worldwide, such rumors tend to spread quickly and fuel broader concerns about privacy and big tech’s data practices.
While the Gmail claims are false, experts say the broader anxiety stems from a real trend: more tech companies are exploring the use of user data for AI development.
In regions with strict privacy laws — such as the European Union — companies like Meta and LinkedIn have already begun disclosing that some user data may be used for AI training.
Adding to public distrust, Google agreed in May 2025 to pay $1.375 billion to settle allegations that it harvested Texans’ biometric data without consent.