Google turns to AI to clean up the AI mess
Artificial intelligence has become a major enabler for online spammers and scammers, but tech giant Google is increasingly using the same technology to counter the threat.
From fake advertisements promoting miracle herbal cures to AI-generated videos using celebrity-like voices, users are frequently exposed to sophisticated spam and scam content online—much of it created with generative AI.
Experts say the rise of accessible AI tools has worsened a long-standing internet problem. “It’s not that this is a new problem. It is an old problem, supercharged,” said Nate Elliott, principal analyst at Emarketer, adding that AI has dramatically increased the speed and scale at which both legitimate users and criminals can operate.
According to the FBI’s Internet Crime Report, more than 22,000 complaints involving AI-related scams were recorded last year, with losses exceeding $893 million.
In its annual ads safety report, Google said its AI systems are playing a key role in tackling the issue. The company said its generative AI tool Gemini blocked over 99% of policy-violating ads before they reached users.
In 2025, Google removed or blocked over 8.3 billion ads, including 602 million linked to scams, while suspending about 24.9 million advertiser accounts, more than 4 million of them for scam-related activity.
Google, which earned over $200 billion in global ad revenue last year, said thousands of employees support its advertising safety systems. Company executive Keerat Sharma said Gemini now helps analyse hundreds of billions of signals, including user behaviour and campaign patterns, to detect malicious intent more accurately while reducing wrongful suspensions by 80%.
Sharma added that AI has also improved speed, allowing ads to be analysed within milliseconds. Experts, however, expect the battle between AI-driven scams and AI-based defences to continue. The University of Wisconsin-Madison’s Matt Seitz said the problem has grown too large for humans alone to manage.