The AI revolution we’re all living through isn’t powered by magic. It’s powered by electricity. A simply colossal amount of it. Every time you ask a chatbot to write a poem or generate an image of a cat riding a unicycle on the moon, you’re spinning a meter somewhere in a giant, windowless building packed with humming servers. The dirty little secret of the AI boom is that these servers generate an astonishing amount of heat, and dealing with that heat is fast becoming one of the biggest challenges in tech. This isn’t just an engineering problem; it’s a full-blown environmental one.
The race to build ever-more-powerful AI models has inadvertently created an arms race in energy consumption. We’re building digital brains that are getting exponentially smarter, but they’re also getting exponentially hotter. If we don’t figure out a better way to cool them down, our quest for artificial intelligence could have some very real, and very damaging, consequences for the planet. The entire conversation is shifting towards AI infrastructure sustainability, and not a moment too soon.
What Is This ‘AI Infrastructure Sustainability’ Anyway?
It’s a bit of a mouthful, isn’t it? But what it boils down to is simple: can we build and run the physical hardware that underpins AI without completely wrecking our energy budget and environmental goals? For years, the tech mantra has been “faster, bigger, more.” Sustainability was often a footnote, a nice-to-have for the corporate social responsibility report. That is no longer a tenable position. The sheer scale of AI compute demand is forcing a change.
Think about the classic data centre. For decades, the solution to heat was basically just more air conditioning. You’d have rows of servers with fans whirring away, blasting cold air down aisles like a frozen food section in a supermarket. This brute-force approach worked fine when a server rack might pull a few kilowatts of power. But we’re not in that world any more. The demands of training and running large language models are pushing server rack densities to levels that were pure science fiction five years ago. Air conditioning is simply hitting a wall. It’s like trying to cool a blast furnace with a desk fan – you’re fighting a losing battle against physics, and burning a frightening amount of energy to do it.
Forget Air Con, It’s Time for a Swim
This is where liquid cooling comes in. Now, this isn’t a new concept; high-performance computing and gaming enthusiasts have been using water to cool their souped-up rigs for ages. The logic is straightforward: liquids, particularly water, are far more efficient at absorbing and transferring heat than air. What’s new is the urgent, industrial-scale demand for it. The latest chips from Nvidia and its rivals are so powerful and packed so densely that they can’t be effectively cooled by air alone. Moving to liquid is no longer an option; it’s an absolute necessity.
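Just how much more efficient is water than air? A back-of-envelope comparison using textbook material properties (illustrative room-temperature values, not figures from any vendor) makes the gap vivid:

```python
# Heat a coolant can carry away per cubic metre per kelvin of temperature
# rise, using approximate textbook properties at around 25 degrees C.
AIR_DENSITY = 1.2           # kg/m^3
AIR_SPECIFIC_HEAT = 1005    # J/(kg*K)
WATER_DENSITY = 997         # kg/m^3
WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)

def volumetric_heat_capacity(density, specific_heat):
    """Heat absorbed per cubic metre per kelvin, in J/(m^3*K)."""
    return density * specific_heat

air = volumetric_heat_capacity(AIR_DENSITY, AIR_SPECIFIC_HEAT)
water = volumetric_heat_capacity(WATER_DENSITY, WATER_SPECIFIC_HEAT)

print(f"Air:   {air:,.0f} J/(m^3*K)")
print(f"Water: {water:,.0f} J/(m^3*K)")
print(f"Water carries roughly {water / air:,.0f}x more heat per unit volume")
```

By volume, water soaks up heat on the order of three thousand times better than air, which is why the industry's pivot to liquid was only ever a matter of when, not if.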
The standard way to do this is with something called a “cold plate.” It’s a metal block with internal channels that sits directly on top of the hot chip. Cool fluid is pumped through the channels, wicking heat away from the processor and carrying it out of the server to be cooled elsewhere. But even here, we’re hitting limitations. Traditional manufacturing methods, like machining two halves of a block and brazing them together, create seams. And seams are weak points. They impede heat transfer and can fail under the intense pressure and temperature cycles inside a modern server. 3D printing the plates is another route, but the resulting metal can be porous and less effective at conducting heat. When you’re dealing with hardware worth tens of thousands of pounds, “good enough” doesn’t cut it.
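The basic sizing question for a cold plate loop follows from a simple energy balance: the coolant has to carry away every watt the chip dissipates. A minimal sketch of that calculation, where the 1,000 W chip and the 10 K allowable coolant temperature rise are purely illustrative assumptions:

```python
def required_flow_lpm(chip_power_w, delta_t_k, cp=4186.0, density=997.0):
    """Coolant flow (litres/min) needed so that fluid leaving the cold
    plate is delta_t_k kelvin warmer than fluid entering it.
    Energy balance: Q = mass_flow * cp * delta_T, solved for mass_flow.
    Defaults are approximate properties of water."""
    mass_flow = chip_power_w / (cp * delta_t_k)      # kg/s
    return mass_flow / density * 1000 * 60           # m^3/s -> L/min

# Hypothetical 1,000 W accelerator, 10 K permitted coolant rise:
print(f"{required_flow_lpm(1000, 10):.2f} L/min")
```

A litre or two per minute per chip sounds modest until you multiply it across tens of thousands of processors, which is exactly why the internal channel geometry of the plate, and how much pressure it can withstand, matters so much.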
A Startup Stacking the Deck in its Favour
This is the exact problem that a fascinating startup called Alloy Enterprises is trying to solve. As reported by TechCrunch, the company has developed a novel manufacturing technique that could be a game-changer for cooling the next generation of AI hardware. Instead of machining or printing, they use a process called “stack forging.”
Let me try an analogy. Imagine you’re making a perfect mille-feuille pastry. You have incredibly thin layers of pastry and filling, stacked perfectly. Now, imagine doing that with paper-thin sheets of copper. Alloy Enterprises takes these sheets, which can be chemically etched with incredibly intricate patterns for channels, and fuses them together with immense heat and pressure. The result isn’t a stack of layers anymore; it’s a single, solid block of copper with seamless, winding channels flowing through its core.
The performance benefits are, frankly, startling. According to the company, their cold plates show a 35% better thermal performance than competing solutions. The diffusion bonding they use creates a bond so complete that, as CEO Ali Forsyth puts it, “We hit raw material properties. The copper is just as strong as if you had machined it.” They can create features as small as 50 microns—about half the width of a human hair—allowing for incredibly complex and efficient channel designs that perfectly match the hotspots on a chip. This method avoids the porosity of 3D printing and the weak seams of traditional brazing, delivering the best of both worlds.
The Coming Shock to Our Energy Grids
Why does a 35% improvement in cooling efficiency matter so much? Because it has a direct impact on the bigger picture: our energy grids. The U.S. Department of Energy has noted that data centres already account for roughly 4% of national electricity consumption, a share its researchers project could double or even triple by the end of the decade as the AI boom accelerates. Every watt of energy saved in a data centre is a watt that doesn’t need to be generated, reducing the strain on our ageing power infrastructure.
This isn’t a theoretical, future problem. It’s happening right now. Nvidia’s next-generation GPU platform, codenamed Rubin, is rumoured to push server rack power requirements to a mind-boggling 600 kilowatts. To put that in perspective, a typical home in the UK uses around 8-10 kilowatt-hours of electricity per day, which works out to an average draw of well under half a kilowatt. A single rack of these new AI accelerators, running around the clock, would pull as much power as roughly 1,500 homes: a small town’s worth. When you have hundreds or thousands of these racks in one data centre, you can see how AI infrastructure sustainability becomes less about being green and more about basic operational feasibility.
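The arithmetic behind that comparison is worth spelling out, because kilowatts (power) and kilowatt-hours (energy) are easy to muddle. Assuming the rumoured 600 kW figure and 9 kWh/day as a midpoint for a typical UK home:

```python
RACK_POWER_KW = 600      # rumoured next-gen rack draw, running continuously
HOME_KWH_PER_DAY = 9     # midpoint of a typical UK home's 8-10 kWh/day

# Convert both sides to the same units before comparing.
rack_kwh_per_day = RACK_POWER_KW * 24          # energy per day
avg_home_power_kw = HOME_KWH_PER_DAY / 24      # average power draw

homes_equiv = rack_kwh_per_day / HOME_KWH_PER_DAY
print(f"One rack: {rack_kwh_per_day:,} kWh/day, the average consumption "
      f"of about {homes_equiv:,.0f} homes (each averaging "
      f"{avg_home_power_kw:.2f} kW)")
```

Scale that to a campus of a few hundred racks and you are into the territory of a dedicated power station, which is precisely the grid-planning headache utilities are now grappling with.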
You simply can’t power these facilities without a radical rethink of energy sourcing and management. Efficient cooling, like the kind Alloy Enterprises is developing, is a critical piece of the puzzle. If you can cool the chips more efficiently, the entire system becomes more efficient. The massive overhead from industrial-scale air conditioning is reduced, and the power can be dedicated to computation instead of just keeping the building from melting. This is how innovations in thermal management connect directly to the stability of our energy grids.
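The industry’s standard yardstick for that overhead is Power Usage Effectiveness (PUE): total facility power divided by the power actually delivered to the IT equipment, where 1.0 would mean every watt goes to computation. The sketch below uses made-up but plausible numbers to show how leaner cooling shifts the ratio:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT power.
    Lower is better; 1.0 means zero cooling/overhead power."""
    return total_facility_kw / it_load_kw

# Hypothetical 10 MW IT load under two cooling regimes
# (the overhead figures are illustrative assumptions, not measured data):
air_cooled = pue(15_000, 10_000)     # heavy chiller and fan overhead
liquid_cooled = pue(11_500, 10_000)  # leaner liquid-cooling overhead

print(f"Air-cooled PUE:    {air_cooled:.2f}")
print(f"Liquid-cooled PUE: {liquid_cooled:.2f}")
```

In this toy scenario the same computation is delivered with 3.5 megawatts less overhead. Multiply that across a fleet of data centres and the grid-level stakes of better thermal management become obvious.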
Preparing for the Unavoidable Future
The trajectory is clear. The demand for AI computation is not going to slow down. Therefore, the associated power and cooling challenges will only grow more acute. This will force some hard decisions. Will we see a new wave of data centre construction right next to power plants? Will modular nuclear reactors become a common feature of the tech landscape?
Technologies like stack forging are vital because they tackle the problem at its source. Instead of just throwing more energy at the cooling problem, they make the cooling itself fundamentally more efficient. It’s the difference between buying a bigger air conditioning unit for your house versus properly insulating the walls. One is a temporary fix that increases your bills; the other is a permanent solution that lowers them.
The work being done by companies like Alloy Enterprises underscores a crucial truth about the AI gold rush: the real, long-term fortunes may not be made by the prospectors digging for digital gold, but by the companies selling the picks, shovels, and, in this case, the hyper-efficient cooling systems.
The AI industry is at a crossroads. It can continue down the current path of unrestrained energy consumption and hope for the best, or it can proactively invest in the foundational technologies that enable genuine AI infrastructure sustainability. Better cooling isn’t just a technical upgrade; it’s a strategic imperative for the entire industry.
What moves do you think the major cloud providers—Amazon, Google, Microsoft—should be making right now to address this looming energy crunch? The solutions are emerging, but is the will to implement them there yet?


