So, you thought your shiny new AI browser assistant was your clever, private co-pilot? Think again. It turns out it might just be a back door for anyone who knows how to craft a dodgy URL. A new vulnerability, cheekily named ‘HashJack’, has exposed a gaping hole in how some of the biggest names in tech are implementing AI, and it’s a stark reminder of a dirty little secret the industry doesn’t want to talk about: the chaos lurking at the digital edge.
This isn’t just about one bug. This is about a fundamental, and frankly, lazy, approach to security that crumbles the moment we move past the sanitised world of the data centre. As we rush to put AI on every device imaginable—from your phone to your smart toaster—we’re building a digital world on a foundation of sand. And Google’s response to this particular problem? Well, let’s just say it’s not going to fill you with confidence.
What on Earth is AI Edge Security Anyway?
Let’s back up. For quite a while now, “the cloud” has been the centre of the universe. Big, powerful AI models lived in massive data centres owned by Amazon, Google, and Microsoft. But that’s changing. Sending every single request to the cloud is slow and expensive. The new game is AI edge security – running smaller, faster AI models directly on your devices, or “at the edge” of the network.
Think of it like having a personal chef in your kitchen instead of ordering from a restaurant ten miles away. It’s quicker, more private, and tailored to you. This is fantastic for everything from real-time language translation on your phone to smart factory robots making instant decisions. But here’s the catch: your kitchen doesn’t have the same security as a fortified restaurant. Every single one of these edge devices is a new doorway for attackers. Securing one massive data centre is hard enough; securing billions of smart devices is a whole different beast.
The Only Sane Path Forward: Zero-Trust AI
In the old days, cybersecurity was like a castle with a moat. You focused on keeping baddies out. Once they were inside, though, they could often roam freely. This is a spectacularly daft model for a world of interconnected devices. The modern, and correct, philosophy is zero-trust AI.
The principle is simple: trust nothing and no one, ever. Assume any device, user, or application could be compromised. Every single request, whether from inside or outside the network, must be verified before it’s granted access. It’s like having a bouncer check your ID not just at the front door of the club, but also at the bar, at the entrance to the dance floor, and on your way to the loo. Annoying? Perhaps. Secure? Absolutely.
For AI, this means that the model itself is built with suspicion. It constantly validates its inputs and scrutinises its own outputs, ensuring it hasn’t been tricked into doing something it shouldn’t. This isn’t just a feature; it’s a foundational necessity for any AI operating outside the data centre fortress.
The Ticking Time Bomb of IoT
Right now, your world is filled with Internet of Things (IoT) devices. Smart watches, TVs, cameras, thermostats, even light bulbs. The analyst firm Statista projects there will be over 29 billion of them by 2030. Each one is a tiny, often poorly secured, computer connected to the internet. Effective IoT vulnerability management isn’t just a good idea; it’s the only thing standing between a functioning smart home and a botnet-controlled digital zombie.
AI is supposed to be our saviour here. AI-driven tools can constantly scan these device ecosystems, hunting for weaknesses and patching them before attackers can strike. But the irony is that the AI assistants we’re now embedding into our browsers and phones are creating new vulnerabilities even as other AIs try to fix old ones. Which brings us to the mess that is HashJack.
When Defence Must Be Everywhere: Distributed Threat Detection
Because the threat is now everywhere—on every smart device, in every home and office—our defence systems can’t be centralised. This is where distributed threat detection comes in. Instead of one central brain trying to watch everything, you have a network of smaller, coordinated guards.
Each device or local network segment uses its own AI to monitor for suspicious activity. When one guard spots something unusual, it doesn’t just deal with it locally; it alerts the entire network. This collective intelligence allows the system to spot large-scale, coordinated attacks that would be invisible to any single point of defence. It’s the digital equivalent of a neighbourhood watch scheme, but one where everyone is a highly-trained, AI-powered sentinel who never sleeps.
The HashJack Exposé: How Your AI Assistant Got Played
So, let’s get down to the dirty details. Researchers at Cato Networks discovered a brilliantly simple attack they call HashJack, which was first reported by HackRead. The attack targets AI assistants built into web browsers, like Microsoft’s Copilot and Google’s Gemini.
It works by exploiting the humble URL fragment—that little bit of a web address that comes after a ‘#’ symbol. Normally, your browser uses this fragment to jump to a specific part of a page and never sends it to the website’s server. It’s supposed to be a local, client-side instruction. But here’s the rub: the new AI assistants, in their eagerness to be helpful, were reading the entire URL, including the secret instructions hidden in the fragment.
Attackers realised they could embed malicious commands inside a URL fragment on a perfectly legitimate, trusted website. Send that link to a victim, and when they click it, their own browser’s AI assistant would read the hidden command and execute it. Attackers could steal credentials, feed users false information, or even use the AI’s more advanced “agentic” modes to automatically exfiltrate data from other open tabs. It’s a classic indirect prompt injection, weaponising trusted sites against their users.
Microsoft and Perplexity, when notified by Cato Networks back in mid-2024, did the responsible thing. They acknowledged the problem and patched their Copilot and Comet browsers by October and November 2024, respectively.
And Google? According to the report, Google’s response to the vulnerability disclosure for its Gemini-powered assistant was, and I quote: “Won’t Fix (Intended Behaviour)”.
Let that sink in. A known vulnerability that allows for credential theft and data exfiltration via their flagship AI product is considered “intended behaviour”. What does that even mean? Is it intended for the AI to blindly follow instructions hidden in a URL from an unknown source? This isn’t a technical oversight; it’s a jaw-dropping failure of security culture. It suggests a philosophy where adding “magical” AI features takes precedence over ensuring those features aren’t fundamentally insecure. As the role of AI in our digital lives grows, a topic extensively covered by outlets like Forbes, this kind of dismissive attitude from a tech giant is simply not good enough.
The Future is Edgy and Dangerous
This episode is more than just an embarrassing blunder. It’s a preview of the future of AI edge security. We are going to see a flood of these kinds of attacks, targeting the seams and gaps between different systems—the browser, the OS, the AI assistant, and the countless IoT devices they connect to.
The winners will be the organisations that embrace a zero-trust AI philosophy from the ground up and build robust systems for distributed threat detection. The losers will be those who, like Google in this case, seem to view security as an afterthought to be bolted on later, or worse, an inconvenience to be dismissed.
We are hurtling towards a future where AI is not in a distant cloud, but in your pocket, on your wrist, and in your car. It will have access to your most personal data and the ability to act on your behalf. An AI that can be hijacked by a cleverly crafted link is not a helpful assistant; it’s a liability.
So, the next time a company unveils a magical new AI feature, ask yourself: is this genuinely innovative, or is it just another “intended behaviour” waiting to be exploited? What other back doors are we leaving open in the race to make everything “smart”? Let me know your thoughts.


