The Great AI Race: Are We Driving Without Brakes?
Let’s be frank. The conversation around Artificial Intelligence swings between two extremes: unbridled utopian hype and dystopian terror. One minute, we’re told AI will cure all diseases and solve climate change. The next, we hear whispers of an impending digital apocalypse, a genuine AI existential risk that could spell the end of humanity as we know it. It’s enough to give anyone whiplash.
Amid this chaos, the market has made its choice, pouring eye-watering sums of money into anything that smells like a large language model. The dominant narrative has shifted from doomsday scenarios to bubble trouble. Yet, as a recent piece in the MIT Technology Review points out, the ‘doomers’—those who warn of catastrophic outcomes—haven’t gone away. They’re just doubling down. So, what’s really going on? Are the real risks just getting lost in the noise of the stock market ticker?
What’s Got Everyone So Worried?
The Specter of Unaligned Intelligence
First, let’s get our terms straight. When we talk about AI existential risk, we aren’t just talking about biased algorithms or deepfakes muddying political waters. Those are serious problems, to be sure, but they are manageable. Existential risk is the big one—the threat of human extinction or a permanent, drastic curtailment of our future potential, caused by an artificial intelligence we can no longer control.
The core fear isn’t of a malevolent AI deciding it hates humanity in a fit of pique. It’s far more subtle and, frankly, more chilling. The concern is about a highly capable AI given a seemingly benign goal—say, “maximise paperclip production”—that it pursues with such single-minded, alien logic that it converts the entire planet, including us, into paperclips or the resources to make them. It’s the ultimate case of being careful what you wish for.
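To make that “careful what you wish for” point concrete, here is a deliberately toy sketch in Python. Nothing about it comes from a real system; the resource names and yield figures are invented purely for illustration. It simply shows how an optimiser whose objective mentions only paperclips treats everything else as worthless raw material:

```python
# Toy illustration of a misspecified objective. The "planner" below is handed
# a single goal, maximise paperclips, and nothing else. Resource names and
# yield figures are invented for the example; the point is that anything
# absent from the objective is valued at zero and consumed without hesitation.

def plan(resources: dict[str, float], clips_per_unit: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Greedily convert every resource into paperclips under a naive
    'maximise paperclips' objective with no side constraints."""
    produced = 0.0
    leftover: dict[str, float] = {}
    for name, amount in resources.items():
        produced += amount * clips_per_unit.get(name, 0.0)
        leftover[name] = 0.0  # everything convertible gets converted
    return produced, leftover

world = {"iron ore": 1_000.0, "farmland": 500.0, "forest": 300.0}
yields = {"iron ore": 50.0, "farmland": 5.0, "forest": 2.0}

clips, remaining = plan(world, yields)
print(f"Paperclips produced: {clips:,.0f}")
print(f"World left over: {remaining}")  # nothing is reserved for anything else
```

The lesson isn’t that the planner is too clever; it’s that the objective is too narrow. Getting the missing values into the objective in the first place is, in essence, the alignment problem.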
The ‘Doomer’ Perspective Isn’t New, It’s Just More Urgent
The people sounding this alarm aren’t just science-fiction writers. They are respected computer scientists, philosophers, and researchers within the AI field itself. They argue that as we build more powerful and autonomous systems, the chance of creating something that is both incredibly intelligent and fundamentally misaligned with human values increases. While the rest of the world is dazzled by AI’s ability to create poetry or code, they’re asking the uncomfortable question: do we actually know how to control something smarter than we are?
A Cautionary Tale from a Different Tech Boom
To understand the danger of unchecked technological acceleration, we don’t need to look to the future. We can look at something happening right now, with a technology we thought we understood: electric vehicles.
China’s Mountain of Dead Batteries
China has masterfully executed a plan to dominate the electric vehicle market. Thanks to years of hefty government support, the country is on track for nearly 60% of new cars sold to be electric or plug-in hybrids by the end of 2025, according to the MIT Technology Review. A stunning success for green transport, right?
Well, yes and no. This rapid scale-up has created an enormous, unforeseen consequence: a tidal wave of dead batteries. The country’s recycling infrastructure is completely overwhelmed, giving rise to a dangerous, unregulated “grey market”. As the report highlights, rogue operators are dismantling these batteries in unsafe conditions, risking fires, explosions, and toxic chemical leaks. China accelerated front-end production without adequately planning for back-end disposal.
The Perfect Analogy for the AI Predicament
This EV battery crisis is a perfect, tangible analogy for the AI risk problem. We are in a global race to build the biggest and best AI models, fuelled by national pride and colossal venture capital funding. Companies like Meta, Google, and a constellation of startups are pushing the boundaries of what’s possible. But in this gold rush, we are celebrating the manufacturing without building the recycling plant.
We face serious constraints not just in training these models, but in understanding them. The “disposal” problem for AI isn’t a pile of toxic metal; it’s a super-intelligent system operating on principles we don’t fully grasp. The grey market for batteries shows us what happens when a technology’s lifecycle isn’t managed: corners get cut, safety is ignored, and unintended consequences pile up. What does an AI “grey market” look like? Perhaps it’s open-source models being weaponised, or autonomous agents deployed without sufficient guardrails, all because the difficult work of safety was sacrificed for speed.
Funding Safety or Just Fuelling the Fire?
This brings us to the miserable economics of it all. The AI industry is now valued in the trillions. Yet the resources dedicated to ensuring it doesn’t accidentally wipe us out are, by comparison, a rounding error.
The Crucial Need for Alignment Research Funding
This is where alignment research funding comes in. Alignment is the field dedicated to solving the core problem: how do we ensure an AI’s goals actually align with our own? It is the slow, painstaking work of building the safety features for the super-intelligent engines of tomorrow, from interpreting the “black box” of neural networks to developing formal, mathematical guarantees about an AI’s behaviour.
Right now, the glamour and money are in making models bigger, not safer. This is a structural problem. An investor can see a return on a 10% performance improvement in a model. The return on preventing a catastrophe in 2045 is, let’s just say, a bit harder to quantify on a quarterly report.
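To see why the quarterly-report framing breaks down, consider a back-of-the-envelope expected-value comparison. Every figure below is hypothetical, chosen only to illustrate the asymmetry rather than to estimate real probabilities or valuations:

```python
# Back-of-the-envelope expected-value sketch. All numbers are hypothetical
# placeholders; the point is the asymmetry, not the specific figures.

capability_gain = 0.10 * 1_000_000_000   # 10% improvement on a $1bn product line
catastrophe_loss = 100_000_000_000_000   # stand-in for "most of humanity's future value"
risk_reduction = 0.0001                  # safety work shaves 0.01% off the catastrophe risk

expected_value_capabilities = capability_gain
expected_value_safety = risk_reduction * catastrophe_loss

print(f"Capability work: ${expected_value_capabilities:,.0f}")  # $100,000,000
print(f"Safety work:     ${expected_value_safety:,.0f}")        # $10,000,000,000
# Even a tiny reduction in a tail risk can dwarf a headline performance gain,
# but only one of these numbers ever appears in a quarterly report.
```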
The Small but Growing Push for Safety Advocacy
Thankfully, a growing movement is demanding a change in priorities. A number of organisations are now dedicated to safety advocacy, pushing for governments and corporations to take these risks seriously. They advocate for:
– Mandatory Audits: Forcing developers to have their most powerful models tested by independent third parties before deployment.
– Clearer Benchmarks: Moving beyond simple performance metrics to test for things like deception, power-seeking behaviour, and uncontrollable self-improvement.
– Increased Funding: Channelling a small percentage of the massive profits from AI back into public-interest safety and alignment research.
This isn’t an anti-technology movement. It is a pro-humanity one. It’s about ensuring that the most powerful tool we ever create remains a tool, not our replacement.
Building the Brakes Before We Hit a Wall
The story of China’s EV batteries is a warning shot. It demonstrates a universal truth: when you race to innovate without building the corresponding safety and end-of-life infrastructure, you will eventually be buried under your own success. With AI, the stakes are infinitely higher. A toxic battery dump is a local disaster; an unaligned superintelligence is a global one.
We need to foster a development culture where safety isn’t an afterthought or a sideshow. It needs to be a core part of the engineering process, with the funding and prestige to match. We’re building the engine of the 21st century. Are we going to bother building the brakes, too? Or are we content to just close our eyes and see how fast this thing can go?
What do you think? How much should we be investing in AI safety compared to capabilities, and who should be responsible for it? Let me know your thoughts below.