It seems the Wild West era of enterprise AI might finally be drawing to a close. For what feels like an eternity, we’ve watched companies tentatively play with AI agents in sandboxes, too frightened to let them handle anything truly important. Why? Because connecting these clever bits of code to actual, sensitive company data has been a security nightmare waiting to happen. But a recent update to the Model Context Protocol (MCP), backed by the unholy trinity of AWS, Microsoft, and Google Cloud, suggests the sheriffs are finally in town.
Spearheaded by AI darling Anthropic, this isn’t just another boring spec update. This is a foundational shift, moving agentic AI from a fascinating but risky experiment into something that Chief Information Security Officers (CISOs) might actually sign off on. The core of this evolution is a much-needed focus on MCP protocol security, finally addressing the operational chaos that has kept these powerful tools locked in pilot programmes.
So, What is this MCP Thing Anyway?
Think of the Model Context Protocol as a universal translator and security guard for AI agents. Before MCP, getting an AI agent to talk to different databases, APIs, and software services was a custom, brittle affair. It was like trying to plug a British hoover into a European socket, then a Japanese one, then an American one – a chaotic mess of adapters and potential short circuits.
MCP creates a standardised way for these agents to request and receive information, or ‘context’. As highlighted by a recent report from Artificial Intelligence News, the protocol is designed to handle the complex, long-running tasks that enterprises actually need, like processing an insurance claim from start to finish. This isn’t just about asking an AI to write a poem; it’s about giving it a secure, long-term memory and the tools to do a proper job.
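To make the "universal translator" idea concrete, here is a minimal sketch of what a standardised agent request can look like. MCP messages are carried over JSON-RPC 2.0; the tool name and arguments below are invented for illustration, not part of any real server.

```python
import json

# A minimal sketch of an MCP-style JSON-RPC request.
# "lookup_claim" and its arguments are hypothetical -- real MCP
# servers advertise their own tools, which agents discover at runtime.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_claim",                  # hypothetical tool
        "arguments": {"claim_id": "CLM-2041"},   # illustrative data
    },
}

payload = json.dumps(request)
print(payload)
```

The point is not the specific fields but the uniformity: every database, API, or service behind an MCP server speaks the same envelope, so the agent needs one adapter instead of dozens.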
The latest specification beefs this up significantly, introducing features for these extended workflows and, crucially, much tighter security controls. It’s the infrastructure layer that was always missing.
Hardening the Gates: Security Gets Real
Let’s be honest, the initial state of MCP security was… porous. One report found that security researchers discovered around 1,800 MCP servers exposed on the public internet by mid-2025. This is the kind of statistic that gives executives sleepless nights. The new specification tackles this head-on, baking in principles that should have been there from the start.
The headline act is the formal integration of a zero-trust architecture. The old model of security was like a castle with a moat: once you were inside, you could roam freely. Zero trust assumes the bad guys are already inside the walls. Every single request, whether from a user or an AI agent, must be verified before access is granted. For an AI agent wanting to access customer data, this means it has to prove its identity and authorisation for that specific piece of data, every single time. It’s a pain, but a necessary one.
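In pseudocode terms, the zero-trust rule reduces to a gate that is evaluated on every single call, with no standing access carried over from previous requests. Everything in this sketch (the agent names, resources, and scope table) is illustrative rather than drawn from the MCP specification:

```python
# Sketch of a zero-trust gate: every agent request is checked against
# identity, resource, and action -- there is no "inside the moat".
# All identifiers here are hypothetical, for illustration only.

ALLOWED_SCOPES = {
    # (agent identity, resource) -> set of permitted actions
    ("claims-agent", "customers/records"): {"read"},
}

def authorise(agent_id: str, resource: str, action: str) -> bool:
    """Return True only if this agent holds this action on this exact resource."""
    return action in ALLOWED_SCOPES.get((agent_id, resource), set())

# Re-verified on every call, even for an agent that was trusted a second ago:
assert authorise("claims-agent", "customers/records", "read")
assert not authorise("claims-agent", "payroll/records", "read")
```

The overhead is real, but so is the payoff: a compromised agent gets exactly the access it can prove it holds for that one request, and nothing more.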
This update isn’t happening in a vacuum. Industry players are taking this seriously. Harish Peri, an SVP at the identity management firm Okta, has noted the critical importance of secure data access protocols like this for enterprise adoption. It’s a signal that the adults have entered the conversation.
The Rise of the AI Security Guards
Beyond just locking things down, the new MCP framework paves the way for something far more dynamic: autonomic defence systems. This is the really interesting part. We’re talking about using AI to defend AI. These are systems that can monitor the behaviour of other AI agents in real-time, spot anomalies, and automatically shut down threats before a human even knows something is wrong.
– Is an agent suddenly trying to access a payroll database it has never touched before? Block it.
– Is it making an unusual number of requests in the middle of the night? Isolate it.
– Is it trying to send data to an unknown external address? Terminate it.
This is proactive, intelligent defence, and MCP provides the hooks for these autonomic systems to plug into the agent ecosystem. It’s the logical next step in securing systems that operate at a speed and scale no human security team could possibly manage.
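The three rules above can be sketched as a toy policy engine. The resource names, thresholds, and response labels are all invented for illustration; a production system would derive its baselines from observed behaviour rather than hard-coded sets:

```python
# Toy autonomic monitor implementing the three rules above:
# block novel resource access, isolate unusual night-time bursts,
# terminate unknown egress. All values are illustrative.

KNOWN_RESOURCES = {"claims-db"}                  # resources this agent has touched before
KNOWN_EGRESS = {"reports.internal.example"}      # approved external destinations
NIGHT_HOURS = range(0, 6)                        # 00:00-05:59
BURST_LIMIT = 100                                # requests/min considered unusual

def evaluate(event: dict) -> str:
    """Map one observed agent event to an automatic response."""
    if event["resource"] not in KNOWN_RESOURCES:
        return "block"        # payroll database it has never touched? Block it.
    if event["hour"] in NIGHT_HOURS and event["requests_per_min"] > BURST_LIMIT:
        return "isolate"      # unusual volume in the middle of the night? Isolate it.
    if event.get("egress") and event["egress"] not in KNOWN_EGRESS:
        return "terminate"    # data heading to an unknown address? Terminate it.
    return "allow"

print(evaluate({"resource": "payroll-db", "hour": 14, "requests_per_min": 3}))
```

The real systems MCP enables would learn these baselines dynamically; the hooks the protocol provides are what let such a monitor sit between agents and the resources they touch.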
Thinking Like the Enemy: AI Threat Modelling
Of course, you can’t build a good defence without understanding the attack. This is where AI threat modelling comes into play. It’s the process of systematically thinking through all the ways your shiny new AI system could be broken, tricked, or turned against you.
With the standardisation that MCP brings, we can start to build more consistent threat models. Instead of every company inventing its own bespoke security theatre, they can now work from a common playbook. The questions become more focused:
– How could an attacker poison the context data being fed to an agent via MCP?
– Could a compromised agent use MCP to move laterally across our network?
– What happens if an attacker manages to register a malicious server in the MCP registry?
According to Mayur Upadhyaya, CEO of APIContext, identifying these new vulnerabilities is a top priority. With the MCP registry having grown by over 407% since last September, the attack surface has expanded with it. Standardisation is great for builders, but it’s also a wonderfully predictable target for attackers.
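One common mitigation for the malicious-registry question above is to pin approved servers to a known digest before an agent is allowed to connect, so a swapped or tampered entry fails the check. This is a generic integrity-pinning sketch, not a mechanism defined by the MCP specification; the server names and manifests are invented:

```python
import hashlib

# Sketch of registry hardening: pin each approved MCP server to the
# SHA-256 digest of its manifest. A malicious entry registered under a
# trusted name will not match the pinned digest.
# All names and manifest contents here are hypothetical.

APPROVED = {
    "crm-server": hashlib.sha256(b"crm-server-manifest-v3").hexdigest(),
}

def is_trusted(name: str, manifest: bytes) -> bool:
    """Connect only to servers whose manifest matches the pinned digest."""
    expected = APPROVED.get(name)
    return expected is not None and hashlib.sha256(manifest).hexdigest() == expected

assert is_trusted("crm-server", b"crm-server-manifest-v3")
assert not is_trusted("crm-server", b"tampered-manifest")
assert not is_trusted("unknown-server", b"anything")
```

Pinning does not stop context poisoning or lateral movement on its own, but it turns "who can register a server?" from an open question into a checkable policy.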
The Cloud Giants Place Their Bets
Why are AWS, Microsoft, and Google Cloud all piling in to support this? Simple: they are in a brutal war to own the next generation of computing, and enterprise AI is the grand prize. By backing a single, open protocol, they are trying to commoditise the agent-to-infrastructure link. They don’t want to compete on whose proprietary connector is better; they want to compete on who provides the best, most secure, and most powerful cloud platform for these agents to run on.
Microsoft’s move to add native MCP support directly into Windows 11 is a massive tell. It’s a declaration that they see AI agents not as some niche cloud service, but as a fundamental part of the future operating system. It mirrors their strategy with the internet in the 90s. The sheer scale of the infrastructure being built, like OpenAI’s rumoured multi-gigawatt ‘Stargate’ supercomputer, shows that the commitment is real and the stakes are astronomically high.
This collaboration is good for standardisation, but it also centralises incredible power. The security decisions made by this small group of companies will shape the safety and reliability of enterprise AI for everyone. Are we comfortable with that?
The transition from experimental to production-ready AI isn’t just about better algorithms; it’s about building the boring, reliable, and secure plumbing. The latest MCP update is arguably the most important piece of that plumbing yet. It’s the moment MCP protocol security became a real conversation, rather than a line item on a future wish list.
The path is now clearer for companies to move their AI initiatives out of the lab. But this new standard also creates new challenges. A standard target is an attractive one. The real test will be in the implementation and the vigilance of the organisations deploying it.
What new security blind spots do you think will emerge as MCP becomes widespread? How will smaller companies keep up with the security demands of this new AI ecosystem? The groundwork has been laid, but the hard work is just beginning.