Did anyone really think that bolting a generative AI onto the world’s most used information portal would go off without a hitch? Google, in its infinite wisdom and relentless race against OpenAI, has rolled out its AI Overviews, and surprise, surprise—it’s already being used to scam people. The very tool designed to give us instant, summarised answers is now a shiny new vector for old-school fraud. These are not just teething problems; they are fundamental AI search vulnerabilities that expose a critical flaw in how big tech is building our supposed future.
What Are AI Search Vulnerabilities?
So, what are we actually talking about here? In simple terms, AI search vulnerabilities are the cracks in the armour of these new AI-driven search systems. Because these models are designed to scrape, synthesise, and present information from across the web, they can be tricked. They don’t ‘understand’ information in the way a human does; they recognise patterns and aggregate data.
Think of it like an over-eager assistant told to summarise a pile of documents. If a prankster slips a fake memo with a bogus phone number into the middle of the pile, the assistant, focused only on speed and efficiency, might just include it in the final brief. This is essentially what’s happening. The AI model pulls data from sources, including low-profile, scam-riddled websites, and presents it with the same authority as information from an official source. This is the core vulnerability: a failure to distinguish between credible and malicious information.
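To make that analogy concrete, here is a toy sketch of the failure mode. Everything in it is hypothetical (the snippets, the numbers, the regex): it simply shows an aggregator that pattern-matches a phone number out of scraped text with no credibility weighting at all, so a scam-seeded snippet can win just by being encountered first.

```python
import re

# Illustrative scraped snippets; the first is a scam-seeded forum post,
# the second is the genuine company page. Both numbers are made up.
snippets = [
    "Call Example Bank support NOW: 0900 999 999",            # planted by a scammer
    "Contact Example Bank on 0800 111 222 (official site)",   # genuine number
]

def first_phone_number(texts):
    """Return the first thing that looks like a phone number.
    There is no notion of source credibility: first match wins."""
    for text in texts:
        match = re.search(r"\b0[89]00 \d{3} \d{3}\b", text)
        if match:
            return match.group()
    return None

print(first_phone_number(snippets))  # prints the scam-seeded number
```

The bug is not in the pattern matching; it is in treating every source as equally authoritative, which is exactly the flaw described above.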
The Role of Malicious Prompt Engineering
This is where things get a bit more deliberate. Scammers aren’t just getting lucky; they’re actively exploiting these systems using what’s known as malicious prompt engineering. This is the art of crafting online content—hidden text, forum posts, obscure web pages—specifically designed to be picked up and prioritised by a web-scraping AI.
These bad actors are effectively reverse-engineering what the AI looks for. They know the AI is hungry for concise answers, like a contact number for a company’s customer service. So, they seed the web with fake ones. According to a recent report in WIRED, scammers are successfully planting fraudulent phone numbers on obscure corners of the internet, which Google’s AI then dutifully scrapes and presents at the top of a search result. It’s a classic bait-and-switch, supercharged by a system we’re being told to trust implicitly.
Case Study: Google AI Overviews
Overview of Google’s AI Overviews Feature
Google’s AI Overviews is meant to be the next leap forward in search. Instead of just giving you a list of blue links, it aims to provide a direct, synthesised answer to your query, neatly packaged at the very top of the page. Searching for “how to fix a leaky tap”? The AI Overview might give you a step-by-step guide compiled from various hardware and DIY websites. It’s designed for convenience, a shortcut to knowledge.
But this shortcut has proven to be a direct route to danger for some.
Scams Exploiting AI Overviews
The primary scam making the rounds is stunningly simple yet brutally effective. A user searches for the customer support number for their bank, airline, or a tech company. The Google AI Overview then confidently displays a phone number in its summary box. The problem? That number doesn’t belong to the company. It belongs to a scammer.
As detailed by multiple sources, including The Washington Post, victims who call these numbers are connected to criminals impersonating legitimate support staff. These scammers then attempt to trick the caller into handing over bank details, passwords, or other sensitive personal information. Credit unions and banks have already started issuing warnings to their customers. Google’s response? A PR-friendly statement saying their “anti-spam protections are highly effective” while simultaneously advising users to double-check numbers. You can’t have it both ways.
Trust Verification Systems as a Defence
Importance of Trust Verification
For two decades, Google’s entire business model has been built on a foundation of trust. We trusted that the blue links at the top were there for a reason—relevance and authority. By introducing an AI that cannot reliably distinguish between a real support number and a scammer’s burner phone, Google is taking a sledgehammer to that foundation.
This is why robust trust verification systems are not a ‘nice-to-have’; they are an absolute necessity. These systems are the gatekeepers, the protocols that should be in place to weigh the credibility of a source before its information is presented as fact. An official website should have an infinitely higher trust score than a random post on a 10-year-old forum. Right now, it seems Google’s AI is failing this basic credibility test.
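What would such a gatekeeper look like? Here is a minimal sketch, assuming a registry of verified domains exists; the domain lists, score values, and function names are all illustrative, not anything Google has published. The point is only the shape of the check: critical data should clear a high trust bar before it is displayed.

```python
from urllib.parse import urlparse

# Assumed registry of officially verified domains (e.g. a company's
# registered website) versus patterns that hint at low-trust sources.
# Both lists are purely illustrative.
VERIFIED_DOMAINS = {"example-bank.com", "example-airline.com"}
LOW_TRUST_HINTS = ("forum", "blogspot", "free-hosting")

def trust_score(source_url):
    """Return a 0.0-1.0 credibility score for a source URL."""
    domain = urlparse(source_url).netloc.lower().removeprefix("www.")
    if domain in VERIFIED_DOMAINS:
        return 1.0   # official, verified source
    if any(hint in domain for hint in LOW_TRUST_HINTS):
        return 0.1   # obscure or user-generated content
    return 0.5       # unknown: needs corroboration before display

def safe_to_display(source_url, threshold=0.9):
    # Critical data (phone numbers, payment details) should only be
    # surfaced from sources that clear a deliberately high bar.
    return trust_score(source_url) >= threshold
```

Under this scheme, a random decade-old forum post simply never qualifies as a source for a bank's contact number, no matter how well it matches the query.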
Measures to Combat SERP Manipulation
The fight against the manipulation of search engine results pages (SERPs) is as old as search engines themselves. Search Engine Optimisation (SEO) has always had its dark side: ‘black hat’ techniques used to artificially boost rankings. What we’re seeing now is the next evolution: SERP manipulation defence in the age of AI.
Mitigating this requires a multi-layered approach.
– Source Prioritisation: The AI must be explicitly programmed to give overwhelming preference to official, verified sources (like a company’s registered domain) when presenting critical data like contact information.
– Cross-Referencing: Instead of relying on a single data point, the system should be required to cross-reference contact details across multiple high-authority sources before displaying them.
– User Feedback Loops: Provide a clear and immediate way for users to flag incorrect or malicious information within an AI Overview, with those reports being fed directly into the trust algorithm.
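The cross-referencing step above can be sketched in a few lines. This is a hypothetical illustration, not a real pipeline: the function, the scraped pairs, and the trusted-domain set are all invented. It shows the core rule, that a contact number is only displayed when enough distinct trusted sources agree on it.

```python
def corroborated_number(candidates, trusted_domains, min_sources=2):
    """candidates: iterable of (phone_number, source_domain) pairs
    scraped from the web. Return a number only if at least
    min_sources distinct trusted domains agree on it; otherwise
    None, signalling a fallback to plain links."""
    support = {}
    for number, domain in candidates:
        if domain in trusted_domains:
            support.setdefault(number, set()).add(domain)
    for number, domains in support.items():
        if len(domains) >= min_sources:
            return number
    return None

# Illustrative data: two trusted sources agree on the genuine number;
# the scam-seeded number has no trusted support and is never shown.
scraped = [
    ("0800 111 222", "example-bank.com"),      # official site
    ("0800 111 222", "registry.example.org"),  # independent trusted listing
    ("0900 999 999", "random-forum.net"),      # scam-seeded post
]
trusted = {"example-bank.com", "registry.example.org"}
```

Notice the design choice: when corroboration fails, the system returns nothing rather than its best guess. For critical data like contact details, declining to answer is the safe default.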
Enhancing Digital Literacy Protocols
The Need for Digital Literacy
While we can—and should—demand that Google fix its broken product, we also have to face a new reality. The internet is now a place where you can’t even trust the search engine to give you a correct phone number. This calls for an urgent upgrade in our collective digital literacy protocols.
Digital literacy is no longer just about knowing how to use a computer. It’s about developing a critical mindset—a healthy dose of scepticism—towards the information presented to us, especially when it’s generated by an algorithm. We need to teach ourselves, and the next generation, that convenience often comes at the cost of verification.
Guidelines for Users
So, what can you actually do? Here are some practical steps to protect yourself:
– Never Trust, Always Verify: Treat any phone number, email address, or critical piece of information from an AI Overview as unverified. It’s a suggestion, not a fact.
– Go to the Source: When you need a customer service number, navigate directly to the company’s official website. Type the URL yourself or use a trusted bookmark. Don’t rely on the search summary.
– Use Official Apps: For services like banking, use the official mobile app, which will have secure contact methods built-in.
– Be Sceptical of Urgency: Scammers thrive on creating a sense of panic. If a “support agent” is pressuring you to act quickly, it’s a massive red flag. Hang up.
Looking Ahead
The uncomfortable truth is that these AI search vulnerabilities are not going away. As AI models become more integrated into our daily information diet, the attack surface for malicious actors will only grow. We are at the very beginning of a new cat-and-mouse game between tech platforms and those who seek to exploit them. For now, Google’s AI Overviews cannot be disabled, so vigilance is our primary defence.
This entire episode is a stark reminder that in the rush to innovate, core principles like safety and trust are often treated as afterthoughts. We, the users, are the beta testers in an experiment we never signed up for. The real question is, how much damage will be done before these tech giants prioritise our security over their race for AI dominance?
What are your thoughts? Have you encountered suspicious information in an AI-generated search result? Share your experience in the comments below.