A new report from the cybersecurity firm Wiz has thrown a large bucket of cold water over the AI hype parade. Their research, as highlighted by a recent article in Artificial Intelligence News, found that a staggering 65 percent of the 50 leading AI firms had leaked sensitive credentials such as API keys and tokens on public platforms like GitHub. We’re not talking about small-fry startups working out of a garage. These are companies with a combined valuation of over $400 billion, the darlings of venture capital, the very architects of the next technological revolution. The core issue isn’t some super-sophisticated new hacking technique; it’s a fundamental breakdown in cyber risk management.
The Scale of the Digital Leakage
A Golden Ticket for Attackers
Right now, the prevailing wisdom in Silicon Valley and beyond seems to be “move fast and break things”. The problem is, the things being broken are the essential trust and security protocols that underpin the entire digital economy. According to Wiz, these leaks aren’t trivial. They are active, verified secrets that could give an attacker a “golden ticket” into the very heart of these organisations. As Glyn Morgan of Salt Security aptly put it, “By pushing credentials into code repositories they hand attackers a golden ticket to systems, data, and models”.
Imagine gaining access to a company’s private code repositories. You wouldn’t just see the code; you’d see the company’s DNA. You could map out their organisational structure, pinpoint key engineers, and, most terrifyingly, potentially access or manipulate their proprietary AI models and the training data that fuels them. In one instance unearthed by the researchers, a single leaked token “allow[ed] access to about 1K private models”. This isn’t a theoretical risk; it’s an open invitation to corporate espionage, data theft, and model poisoning on an industrial scale. The implications for AI security vulnerabilities are profound.
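To make that risk concrete, here is a minimal sketch of how a researcher could check whether a leaked Hugging Face token is still live and roughly what it can see, using the official huggingface_hub client. The API calls (HfApi, whoami, list_models) are real, but the scenario is hypothetical and simplified; it is not how Wiz describes its own verification process.

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import HfHubHTTPError

def check_leaked_token(token: str) -> None:
    """Report whether a Hugging Face token is still live and roughly what it can see."""
    api = HfApi(token=token)
    try:
        identity = api.whoami()  # fails if the token has been revoked
    except HfHubHTTPError:
        print("Token appears to be revoked or invalid - no further access possible.")
        return

    account = identity.get("name", "<unknown>")
    print(f"Token is ACTIVE for account: {account}")

    # A live token can enumerate models belonging to that account, including
    # any private ones the token is scoped to see.
    visible = sum(1 for _ in api.list_models(author=account))
    print(f"Models visible to this token: {visible}")
```

The point isn’t the tooling; it’s how little friction stands between a string in a public commit and the crown jewels behind it.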
Why Aren’t We Catching This?
So, the billion-dollar question is: how is this happening? Don’t these companies, flush with cash and supposedly staffed by the brightest minds on the planet, have security scanners? Well, yes, but it turns out many of them are using the equivalent of a chocolate teapot. Traditional security scanning methods are proving woefully inadequate for the way modern AI development works.
It’s a bit like a security guard who only patrols the front entrance of a skyscraper but never checks the underground car park, the service tunnels, or the executive washrooms. The current tools are good at scanning the ‘surface’ of a code repository – the most recent version of the code. But they completely miss the digital breadcrumbs left behind in the full commit history. Every change a developer makes, every experiment, every “oops, I shouldn’t have put that there”, is logged forever in the repository’s history unless explicitly scrubbed. Attackers know this. They aren’t looking at the front door; they are rummaging through the digital dustbins, and they are finding treasure. This failure to perform deep inspection is a critical flaw in current cloud security postures.
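For the technically minded, the difference between a surface scan and a deep scan is easy to see in code. The sketch below is a simplified illustration rather than a production scanner: it walks the full git history of a repository instead of just the checked-out files, and the regex patterns are assumptions about common token shapes, not definitive formats.

```python
import re
import subprocess

# Illustrative patterns only; real token formats vary by provider.
SECRET_PATTERNS = {
    "huggingface_token": re.compile(r"hf_[A-Za-z0-9]{30,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
}

def scan_full_history(repo_path: str) -> list[tuple[str, str]]:
    """Scan every commit on every branch, not just the current working tree."""
    # `git log -p --all` emits the patch of every commit, which is exactly where
    # a key that was "removed five minutes later" still lives.
    history = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True, text=True, errors="replace", check=True,
    ).stdout
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        findings.extend((name, m.group(0)) for m in pattern.finditer(history))
    return findings

if __name__ == "__main__":
    for kind, value in scan_full_history("."):
        print(f"[{kind}] possible secret in history: {value[:8]}...")
```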
The Root Causes of Our AI Insecurity
Scanning That’s a Mile Wide and an Inch Deep
The Wiz report identifies three critical failings in how companies approach scanning for secrets: Depth, Perimeter, and Coverage. It’s a simple framework, but one that exposes the systemic nature of the problem.
Depth: As mentioned, most tools only look at the latest code. They don’t perform a ‘deep scan’ of the entire commit history of a project. A developer might accidentally commit an API key, realise their mistake five minutes later, and remove it. To a surface-level scan, everything looks fine. But to an attacker with the right tools, that key is forever etched into the project’s history, ready to be exploited.
Perimeter: The problem isn’t just confined to a company’s main, official repository. Developers often ‘fork’ a repository, creating a personal copy to work on. These forks often fall outside the purview of corporate security tools, becoming unmonitored digital wastelands where secrets can lie exposed indefinitely. The security perimeter has to extend beyond the corporate walls to wherever the code lives.
Coverage: The nature of secrets is changing. It’s not just about traditional AWS keys or database passwords anymore. The AI world has created a whole new lexicon of sensitive tokens for services like HuggingFace, Weights & Biases, and ElevenLabs. If your security scanner doesn’t know what these new AI-specific keys look like, it can’t find them. You can’t protect against a threat you can’t see.
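The Coverage gap, in particular, is straightforward to reason about: if a scanner’s pattern list stops at AWS keys and database strings, newer AI-service tokens sail straight past it. The sketch below shows what extending that pattern list might look like; the token formats are assumptions based on commonly observed prefixes, so any real deployment should verify them against each provider’s documentation.

```python
import re

# Assumed token shapes for AI-era services. These are illustrative starting
# points, not authoritative formats; confirm against each provider's docs.
AI_SECRET_PATTERNS = {
    "huggingface_token": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "openai_api_key": re.compile(r"\bsk-[A-Za-z0-9\-_]{20,}\b"),
    "wandb_api_key": re.compile(r"\b[a-f0-9]{40}\b"),       # 40 hex chars; also matches git SHAs, so expect false positives
    "elevenlabs_api_key": re.compile(r"\bsk_[a-f0-9]{40,}\b"),  # assumed prefix
}

def find_ai_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs for anything that looks like an AI-service secret."""
    hits = []
    for name, pattern in AI_SECRET_PATTERNS.items():
        hits.extend((name, m.group(0)) for m in pattern.finditer(text))
    return hits
```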
This isn’t just about bad tools; it’s about a lack of strategic thinking. The speed of AI development has outpaced the evolution of the security practices meant to protect it, leaving a trail of AI security vulnerabilities in its wake.
People, Process, and the Path of Least Resistance
Ultimately, technology problems are almost always people problems. Developers are under immense pressure to build and ship features at a breakneck pace. In this high-pressure environment, security can feel like a roadblock. Taking a shortcut—like temporarily hardcoding a key to test a feature—can seem like a harmless way to save time. The intention is rarely malicious, but the outcome is the same.
This is why the Wiz report’s first recommendation is so vital: treat employees as part of the attack surface. This isn’t about fostering a culture of suspicion. It’s about acknowledging human nature and building systems that make it easy to do the right thing and hard to do the wrong thing. It means implementing stricter policies, providing better tools for managing secrets, and embedding security experts directly into development teams. The responsibility for API key protection can’t just be a line in a handbook; it needs to be woven into the fabric of the development process. Furthermore, the report notes that “almost half of disclosures either failed to reach the target or received no response”, indicating a startling lack of established channels for reporting these critical flaws.
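Making it ‘easy to do the right thing’ can be as simple as a guardrail at commit time. Below is a minimal sketch of a pre-commit hook, written in Python with illustrative patterns, that refuses to commit staged changes that look like they contain a credential. It is a starting point, not a substitute for a proper secrets manager or a dedicated scanner.

```python
#!/usr/bin/env python3
"""Minimal pre-commit guard: block commits whose staged diff looks like it contains a secret.

Save as .git/hooks/pre-commit and mark it executable. The patterns are
illustrative, not exhaustive; a dedicated secret scanner should sit behind this.
"""
import re
import subprocess
import sys

PATTERNS = [
    re.compile(r"hf_[A-Za-z0-9]{30,}"),   # Hugging Face-style token
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[=:]\s*['\"][^'\"]{16,}['\"]"),  # hardcoded assignment
]

def main() -> int:
    # Inspect only what is about to be committed, not the whole working tree.
    staged = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    for pattern in PATTERNS:
        if pattern.search(staged):
            print("Refusing to commit: staged changes appear to contain a credential.", file=sys.stderr)
            print("Move the value into a secrets manager or environment variable, then retry.", file=sys.stderr)
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Dedicated tools go much further, but even a thirty-line hook changes the path of least resistance: the lazy option becomes pulling the key from an environment variable rather than pasting it into the code.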
Forging a More Secure AI Future
It’s Time for a Security Overhaul
So, what’s the way forward? The answer isn’t to slow down innovation, but to make it smarter and more secure. The first step is to get serious about comprehensive secret scanning. This means investing in tools and processes that address the trifecta of Depth, Perimeter, and Coverage. Companies need to look beyond surface-level checks and implement deep scanning across their entire codebase history. They must expand their security perimeter to include all forks and developer-owned repositories. And they must constantly update their scanning tools to recognise the new generation of AI-specific secrets.
This isn’t just a technical fix; it’s a critical component of cyber risk management. For any board of directors or executive team, understanding the true state of their code hygiene should be as important as reviewing their quarterly financial statements. The potential for a single leaked key to cause catastrophic financial and reputational damage is simply too high to ignore. A robust cloud security strategy must account for these granular, developer-level risks.
Securing the Entire AI Supply Chain
The second, and perhaps more strategic, imperative is to extend security measures across the entire AI supply chain. Modern AI applications are not monolithic structures built from scratch. They are complex assemblies of third-party models, open-source libraries, and data APIs. Companies like LangChain, which helps developers chain together different AI components, and HuggingFace, the de facto hub for open-source models, are integral parts of this ecosystem.
A vulnerability in one of these foundational platforms can have a ripple effect, compromising the security of every application built on top of it. This is why a holistic approach to security is essential for building confidence in enterprise AI adoption. A large bank or healthcare provider can’t risk deploying an AI system if its underlying components, sourced from various startups, are riddled with security holes. We need industry-wide standards and collaborative efforts to ensure that security is a shared responsibility, not an afterthought. The security of a company like Perplexity or Groq is inextricably linked to the security of the entire ecosystem.
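One small but concrete supply-chain habit is to pin third-party models to an exact revision rather than pulling whatever happens to be latest. The sketch below uses the huggingface_hub client’s snapshot_download with a hypothetical model ID and commit hash; the principle, not the specific names, is the point.

```python
from huggingface_hub import snapshot_download

# Hypothetical model ID and commit SHA, used purely for illustration.
MODEL_ID = "example-org/example-model"
PINNED_REVISION = "0123456789abcdef0123456789abcdef01234567"

# Pinning `revision` to a specific commit means a compromised or silently
# replaced upstream model cannot change what your application downloads.
local_path = snapshot_download(repo_id=MODEL_ID, revision=PINNED_REVISION)
print(f"Model snapshot at {PINNED_REVISION[:8]} downloaded to {local_path}")
```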
The AI gold rush is well and truly on, but many are building their boomtowns on digital quicksand. The speed of innovation, so celebrated by venture capitalists and tech evangelists, cannot come at the expense of foundational security. This isn’t about stifling creativity; it’s about ensuring the incredible technologies being built are robust, trustworthy, and resilient. Ignoring these basic security principles is not just negligent; it is an existential threat to the long-term viability and adoption of artificial intelligence.
The question we must all ask is this: Is the frantic pressure for rapid AI innovation creating a security time bomb we can’t defuse? And when it inevitably goes off, who will be held accountable—the startup that cut corners, the VCs who demanded growth at all costs, or the enterprise that adopted the technology without conducting proper due diligence?
What are your thoughts on this?


