What Exactly Is This “AI Systemic Risk”?
Let’s be clear. We aren’t talking about a rogue AI from a film plot draining bank accounts. The danger is far more subtle and, frankly, more plausible. AI systemic risk refers to the possibility that the widespread adoption of AI in finance could create or amplify shocks that destabilise the entire system, not just one institution. Think of it as a domino effect, only the dominoes are falling at machine speed.
A recent, sharp analysis published on CEPR’s VoxEU by a team including Stephen Cecchetti and Tuomas Peltonen highlights the core issue. They point out that AI doesn’t necessarily invent new financial vulnerabilities. Instead, it acts as a potent accelerant for old ones we already know and fear, such as liquidity mismatches and extreme interconnectedness.
The problem is that these AI systems, particularly the large foundation models like OpenAI’s GPT-4 or Google’s Gemini, are becoming foundational infrastructure. When every bank, hedge fund, and asset manager starts using similar models for risk assessment, trading, and loan origination, you create a dangerous monoculture. It’s like every farmer in the country planting the exact same strain of wheat. One specific blight, one unexpected pest, and the entire harvest is wiped out.
How Does AI Make a Fragile System Even More Brittle?
The financial system is already a deeply interconnected web. AI turns up the dial on three critical factors that can shatter financial stability.
– Model Uniformity: When everyone relies on the same few AI providers (think Microsoft’s Copilot or Anthropic’s Claude integrated into financial workflows), they are likely to receive similar signals from their models. If a model interprets a market event as a ‘sell’ signal, it won’t be just one firm selling; it could be thousands, all at once, driving prices down in a self-fulfilling feedback loop. The toy simulation after this list makes the effect concrete.
– Unbelievable Speed: AI-driven trading algorithms operate in microseconds. A flash crash, once a rare and shocking event, could become a regular occurrence if models overreact in unison, faster than any human could intervene. This speed leaves no time for correction or thoughtful oversight.
– Opacity and Complexity: Do the bankers deploying these models truly understand how they arrive at their conclusions? Often, the answer is no. These “black box” systems present an enormous challenge for both internal risk management and external regulators. How can you regulate something you can’t fully comprehend?
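To see why uniformity matters so much, here is a deliberately toy simulation, a minimal sketch in plain Python with NumPy. Every number in it, the firm count, the sell threshold, the noise levels, is invented for illustration and is not drawn from the CEPR analysis. Each firm sells when its risk model’s score crosses a threshold; the only thing that changes between the two scenarios is whether the firms’ model errors are idiosyncratic (everyone runs their own model) or common (everyone shares one model).

```python
import numpy as np

rng = np.random.default_rng(42)

N_FIRMS = 1_000        # hypothetical number of institutions
N_DAYS = 10_000        # simulated trading days
SELL_THRESHOLD = 2.0   # a model flags 'sell' when its risk score exceeds this

def mass_sell_frequency(shared_model: bool) -> float:
    """Fraction of days on which more than half the firms sell at once."""
    fundamentals = rng.normal(0.0, 1.0, size=N_DAYS)  # common market shock
    if shared_model:
        # One model, one error term: every firm sees the *same* score.
        noise = rng.normal(0.0, 1.0, size=(N_DAYS, 1))
    else:
        # Each firm's in-house model adds its own idiosyncratic error.
        noise = rng.normal(0.0, 1.0, size=(N_DAYS, N_FIRMS))
    scores = fundamentals[:, None] + noise
    sell_fraction = (scores > SELL_THRESHOLD).mean(axis=1)
    return float((sell_fraction > 0.5).mean())

print(f"diverse models: {mass_sell_frequency(False):.2%} of days see a mass sell-off")
print(f"shared model:   {mass_sell_frequency(True):.2%} of days see a mass sell-off")
```

On a typical run, the shared-model market produces mass sell-offs roughly three times as often, not because the market shocks are any worse, but purely because the model’s mistakes are now correlated across every firm.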
Can Our Old Rules Handle These New Players?
This brings us to the thorny issue of prudential regulation. The rules we have today, like the Basel III accords, were designed for a world of human decisions and slower-moving capital. They are fundamentally unprepared for the speed and scale of AI-driven finance.
Forcing a bank to hold more capital against risky assets works when you can clearly define and measure that risk. But how do you calculate the capital requirement for a portfolio managed by an opaque algorithm that’s constantly learning and changing its own strategy? The current framework is like trying to measure the speed of a bullet with a sundial.
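One way to make that sundial problem concrete is the toy Python sketch below (all figures invented, and nothing like how any supervisor actually computes charges). An adaptive strategy quietly doubles its leverage partway through the sample, while a backward-looking, Basel-style measure such as historical value-at-risk keeps reporting the risk of the model’s old behaviour.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical daily P&L of an adaptive strategy: after day 250 the
# algorithm quietly doubles its leverage, doubling its true risk.
DAYS, SWITCH, LOOKBACK = 500, 250, 250
vol = np.where(np.arange(DAYS) < SWITCH, 0.01, 0.02)
pnl = rng.normal(0.0, vol)

def historical_var(returns: np.ndarray, level: float = 0.99) -> float:
    """Capital proxy: 99% historical value-at-risk over a lookback window."""
    return -np.quantile(returns, 1.0 - level)

# The charge a backward-looking rule would set just after the switch,
# versus the capital the strategy's *new* behaviour actually warrants.
measured = historical_var(pnl[SWITCH - LOOKBACK:SWITCH])
warranted = 2.33 * 0.02   # true 99% VaR once leverage has doubled
print(f"rule-book capital charge : {measured:.4f}")
print(f"risk actually being run  : {warranted:.4f}  (~2x larger)")
```

Until the lookback window fills with post-change data, the rule-book charge and the real exposure diverge; a model that retrains itself daily can stay permanently ahead of the measure.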
Cecchetti and his co-authors rightly argue that we need significant adjustments. This isn’t about small tweaks; it’s about rethinking the philosophy of financial oversight. We need to consider ideas like mandatory transparency labels for AI-driven financial products and “skin in the game” rules for AI developers, holding them partly accountable for the financial outcomes their models produce.
A Global Problem Needs a Global Answer
This isn’t a challenge one country can solve alone. If the UK imposes strict AI policy on its financial sector but the US or Singapore does not, capital and talent will simply flow to the less-regulated environment. We’d be creating havens where risk can fester, only for it to spill back across borders when a crisis inevitably hits.
The urgency for cross-border supervisory cooperation cannot be overstated. Regulators from different jurisdictions need to be speaking the same language and designing compatible rules. Just as we have international agreements on trade and climate, we need a coordinated approach to managing AI systemic risk. The ghost of 2008 should be a constant reminder of how a crisis that starts in one market can set the entire world ablaze.
Consider the historical precedent: the 2008 Global Financial Crisis was fuelled by complex, opaque instruments, collateralised debt obligations (CDOs), that few truly understood, coupled with a herd mentality. Does that sound familiar? AI is poised to create an even more complex, even more opaque, and vastly faster version of that same dynamic.
So, What’s the Path Forward?
It’s easy to feel a bit of despair. The challenge is immense, the technology is advancing at a blistering pace, and our regulatory bodies are, by their very nature, slow and deliberative. Daron Acemoglu’s research, cited in the CEPR piece, estimates AI’s impact on US productivity at a modest 0.05% to 0.06% per year over the next decade. Is that small gain worth risking the stability of our entire financial system?
The answer isn’t to ban AI in finance. The competitive pressures are too great, and the potential benefits, if managed correctly, are real. The path forward requires a multi-pronged strategy built on a new AI policy framework.
– Regulators must get technical. They need to hire data scientists and machine learning experts to build their own capacity to audit and understand these systems.
– AI developers must accept accountability. The “we just build the tools” argument won’t wash when those tools are managing trillions of pounds in global assets.
– Financial institutions need a culture of healthy scepticism. They must challenge the outputs of their AI models and maintain robust human oversight as the ultimate circuit-breaker; a sketch of what such a guardrail might look like follows this list.
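As a purely hypothetical illustration of that last point, here is a minimal Python sketch of a human-in-the-loop guard. Every name, limit, and threshold is invented, and a real trading-risk system would be enormously more involved; the idea is simply that AI-originated orders above a size limit do not execute until a person signs off.

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int
    notional: float  # order size in GBP

# Hypothetical guardrails -- these thresholds are invented for
# illustration, not taken from any real trading system.
MAX_NOTIONAL = 5_000_000           # single-order ceiling before escalation
MAX_DAILY_AI_NOTIONAL = 50_000_000 # total AI-originated flow per day

def review_ai_order(order: Order, ai_notional_today: float,
                    human_approved: bool = False) -> bool:
    """Pass small AI orders through; force a human sign-off on large ones."""
    breaches_limit = (order.notional > MAX_NOTIONAL or
                      ai_notional_today + order.notional > MAX_DAILY_AI_NOTIONAL)
    if breaches_limit and not human_approved:
        print(f"HALT: {order.symbol} for £{order.notional:,.0f} "
              "routed to a human risk officer")
        return False
    return True

# A routine order flows straight through; an outsized one is stopped.
review_ai_order(Order("GILT", 10_000, 1_000_000), ai_notional_today=0.0)
review_ai_order(Order("GILT", 900_000, 90_000_000), ai_notional_today=0.0)
```

The design choice worth noting is that the default path is refusal: when the guard trips, the system fails safe and waits for a human rather than failing open at machine speed.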
This is a collective action problem. We’re at a critical juncture where the decisions we make about governing AI will have consequences for decades. We have the benefit of hindsight from previous financial crises. The question is, are we smart enough to use it this time? Or are we destined to let our own creations lead us into another mess, just a whole lot faster?