So, you want to build an AI powerhouse? Good luck getting a shovel in the ground before 2027. The typical timeline for a greenfield data centre project, from acquiring land and permits to flicking the switch, is a leisurely two to three years. In the world of AI, that’s an eternity. That’s several generations of models come and gone. The challenge is clear: we need scalable AI systems, and we needed them yesterday. This isn’t just about software; it’s a problem of steel, concrete, and high-voltage power lines. And it’s forcing some fascinatingly creative thinking.
What Makes an AI Data Centre Tick? (Hint: It’s Not Just Servers)
First, let’s get one thing straight. An AI data centre is not your grandfather’s server room. The sleepy hum of old-school data centres, with server cabinets sipping a gentle 1-2 kilowatts (kW) of power, is a distant memory. Today’s AI workloads, powered by racks of power-hungry GPUs like Nvidia’s latest silicon or Google’s TPUs, are a different beast entirely. We are talking about densities of 50 kW, 100 kW, or even 140 kW per cabinet.
Think of it this way: a traditional server cabinet was like a single-family home’s electrical system. An AI cabinet, however, is more like the substation for an entire neighbourhood. The sheer power density creates an enormous amount of heat, which brings us to the second, and perhaps most crucial, differentiator: cooling. Air conditioning, the workhorse of legacy data centres, simply can’t keep up. When you concentrate that much power, you need to get the heat out fast, which means embracing liquid cooling. The AI data centre setup has become an exercise in extreme thermodynamics as much as it is an IT project.
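To see why air hits a wall, a rough back-of-the-envelope calculation helps. The Python sketch below uses textbook fluid properties and assumes a 100 kW cabinet with a 10 °C coolant temperature rise; none of these figures come from any specific facility, but they show the scale of the problem.

```python
# Back-of-the-envelope: why a 100 kW cabinet forces liquid cooling.
# Uses Q = m_dot * c_p * delta_T. All figures are textbook values and
# assumptions, not measurements from any specific facility.

RACK_POWER_W = 100_000      # one high-density AI cabinet (assumed)
DELTA_T_K = 10.0            # coolant temperature rise across the rack

# Approximate fluid properties near room temperature
AIR_CP = 1_005.0            # J/(kg*K)
AIR_DENSITY = 1.2           # kg/m^3
WATER_CP = 4_186.0          # J/(kg*K)
WATER_DENSITY = 1_000.0     # kg/m^3

def mass_flow(power_w: float, cp: float, delta_t_k: float) -> float:
    """Mass flow (kg/s) needed to carry power_w away at a given delta-T."""
    return power_w / (cp * delta_t_k)

air_kg_s = mass_flow(RACK_POWER_W, AIR_CP, DELTA_T_K)
water_kg_s = mass_flow(RACK_POWER_W, WATER_CP, DELTA_T_K)

air_m3_s = air_kg_s / AIR_DENSITY                      # ~8.3 m^3/s
water_l_min = water_kg_s / WATER_DENSITY * 1_000 * 60  # ~143 L/min

print(f"Air needed:   {air_m3_s:.1f} m^3/s ({air_m3_s * 2118:.0f} CFM)")
print(f"Water needed: {water_l_min:.0f} L/min")
```

Run as written, it reports roughly 8 m³/s of air (about 17,500 CFM) through a single cabinet, versus around 143 litres of water per minute. One of those is a hurricane; the other is a garden hose.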
This creates a fundamental strategic dilemma for any company going all-in on AI:
– Build New: The ‘pure’ approach. You get exactly what you want, but it costs a fortune and takes years you don’t have. By the time it’s ready, your technology plan might already be obsolete.
– Co-locate: Renting space in an existing facility. This can be faster, but finding a provider with the requisite power and liquid cooling infrastructure for high-density AI is difficult, expensive, and often involves long waiting lists.
But what if there was a third way? A faster, more agile path that blends the customisation of a new build with the speed of using something that already exists?
From Printing Presses to Processing Power: The Patmos Case
Enter Patmos Hosting and a rather brilliant piece of industrial opportunism. As detailed in a recent Network World report, the company managed to stand up a powerful 5-megawatt AI data centre in just 90 days. How? By not building one at all. Instead, they took over the former printing press facility for the Kansas City Star newspaper.
This is what’s known as a “brownfield” conversion, and it’s a stroke of genius. Think about what a massive industrial printing press requires: huge amounts of reliable power, cavernous and structurally sound open spaces, and connections to the outside world. As it turns out, those are the exact same ingredients needed for a modern data centre. The Patmos team saw a decommissioned factory and recognised the bones of a digital powerhouse.
Joe Morgan, the COO of Patmos, put the time savings in stark terms. Had they started from a patch of dirt, he mused, “We’d probably still be [getting the permits]”. Instead, by leveraging the newspaper building’s existing assets—including dual power feeds, chilled water systems, and fibre optic connectivity—they leapfrogged years of construction and bureaucracy. This is the very definition of cost efficiency, not just in terms of capital expenditure, but in the far more valuable currency of market speed.
Keeping Your Cool When the Chips are Hot
The real secret sauce in the Patmos project, however, lies in how they tackled the heat. Packing high-performance Nvidia Grace Blackwell GPUs into racks creates a thermal challenge of epic proportions. Their solution was an elegant and robust multi-loop liquid cooling system.
This might sound complex, but the concept is beautifully simple. Imagine the cooling system in your car. You have a closed loop of coolant circulating through your engine to absorb heat. This hot coolant then goes to the radiator, where a separate system—air blowing across the radiator’s fins—removes the heat and disperses it. The two systems work together but never mix.
Patmos applied the same principle, but with five distinct, separated loops.
1. Municipal city water comes into the facility.
2. This water cools a chiller plant but never touches any other equipment.
3. The chiller plant cools a separate, contained loop of water for the data centre building.
4. This building loop cools another set of pipes that run directly to the server racks.
5. Finally, a specialised fluid inside that last set of pipes circulates directly to the processors, drawing heat away at the source.
This meticulous separation is critical. It ensures that a leak in one part of the system doesn’t cause a catastrophic failure elsewhere. The chlorinated city water never gets near the multi-million-dollar processors, and the specialised cooling fluid for the chips remains in its own pristine, closed loop. It’s a sophisticated piece of plumbing that enables the entire AI data centre setup to function, allowing for the extreme density that modern scalable AI systems demand.
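For readers who think in code, here is a minimal Python sketch of that chain of isolated circuits. The loop and fluid names are illustrative paraphrases of the description above, not Patmos’s actual specification; the point is structural: heat crosses every exchanger, fluid never does.

```python
# A minimal sketch of the multi-loop principle: heat hops across each
# exchanger, but no fluid ever leaves its own sealed circuit. Loop and
# fluid names are illustrative paraphrases, not Patmos's specification.

from dataclasses import dataclass

@dataclass
class CoolingLoop:
    name: str
    fluid: str
    heat_w: float = 0.0  # heat currently carried by this loop

    def absorb(self, watts: float) -> None:
        """Take on heat, either from the chips or from a hotter loop."""
        self.heat_w += watts

    def reject_to(self, cooler: "CoolingLoop") -> None:
        """Pass heat across an exchanger; the fluids never mix."""
        cooler.absorb(self.heat_w)
        self.heat_w = 0.0

# The five circuits, hottest to coolest
chip_loop     = CoolingLoop("direct-to-chip", "specialised coolant")
rack_loop     = CoolingLoop("rack piping",    "treated water")
building_loop = CoolingLoop("building loop",  "chilled water")
chiller_loop  = CoolingLoop("chiller plant",  "refrigerant")
city_loop     = CoolingLoop("municipal feed", "city water")

chip_loop.absorb(5_000_000)  # the facility's 5 MW of IT load
chain = [chip_loop, rack_loop, building_loop, chiller_loop, city_loop]
for hotter, cooler in zip(chain, chain[1:]):
    hotter.reject_to(cooler)

print(f"{city_loop.heat_w / 1e6:.0f} MW ultimately leaves with the city water")
```

Notice that a leak in any single CoolingLoop stays in that loop; nothing in the model, or in the plumbing it mimics, lets city water reach the chips.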
Best Practices for the 90-Day Data Centre
The Patmos story isn’t just a one-off curiosity; it’s a repeatable blueprint for anyone looking to accelerate their AI infrastructure plans. The success of this brownfield conversion offers a clear set of best practices for a rapid AI data centre setup.
Look for the Bones
The most important lesson is to think like a real estate scout with an engineer’s eye. The world is full of decommissioned industrial sites—old factories, warehouses, and yes, printing plants—that are sitting on a goldmine of pre-installed infrastructure.
The checklist for a promising site should include:
– Power: Does it have high-capacity, redundant power feeds already in place? Upgrading electrical infrastructure is one of the most time-consuming and expensive parts of a new build.
– Connectivity: Is the site already served by fibre optic cables? Laying new fibre can be a slow and complex process.
– Structure: Does it have high ceilings and strong floors capable of supporting heavy equipment and the extensive plumbing required for liquid cooling?
– Water: Is there access to a sufficient water supply for the chiller plants?
Finding a site that ticks these boxes dramatically shortens the project timeline and reduces upfront costs, delivering immense cost efficiency.
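If you are screening many candidate buildings, it can even help to make the checklist executable. The Python sketch below is purely illustrative: the criteria names and weights are invented rather than drawn from Patmos’s evaluation process, but they encode the priority order above, with power as the slowest thing to retrofit.

```python
# A hypothetical screening helper for brownfield candidates. The
# criteria mirror the checklist above; the weights are invented for
# illustration, not taken from the Patmos project.

SITE_CRITERIA = {
    "redundant_power_feeds": 4,  # slowest and costliest thing to retrofit
    "fibre_connectivity":    3,
    "structural_capacity":   2,  # ceilings, floor loading, plumbing space
    "water_supply":          2,  # for the chiller plants
}

def score_site(features: set[str]) -> int:
    """Sum the weights of the criteria a candidate site already meets."""
    return sum(w for name, w in SITE_CRITERIA.items() if name in features)

old_printing_plant = {"redundant_power_feeds", "fibre_connectivity",
                      "structural_capacity", "water_supply"}
empty_greenfield = set()

print(score_site(old_printing_plant))  # 11 -> worth a site visit
print(score_site(empty_greenfield))    # 0  -> back to the multi-year build
```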
Design for Modularity and Scale
Patmos isn’t stopping at 5 megawatts. The company is already planning to add another 5 MW of capacity within the next 90-180 days. This highlights a critical principle: modularity. Their cloud infrastructure isn’t a monolithic entity; it’s a series of building blocks. This approach allows them to expand capacity incrementally as demand grows, without having to over-provision from day one.
This modular mindset must extend to every part of the design, from power distribution to cooling and rack layouts. Efficient GPU (or TPU) deployment relies on a standardised, repeatable pod design that can be cookie-cuttered across the facility. This makes installation faster, maintenance easier, and future expansion far more predictable, as the capacity-planning sketch below illustrates. Ultimately, this is what enables the creation of truly scalable AI systems.
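As a toy illustration of the cookie-cutter principle, consider modelling the facility as a list of identical pods. The rack count and per-rack power below are hypothetical (Patmos has not published its pod design), but the arithmetic shows how a 5 MW expansion becomes “add N pods” rather than a bespoke build.

```python
# Cookie-cutter capacity planning: the facility grows by whole pods,
# never by resizing a monolith. The rack count and per-rack power are
# hypothetical; Patmos has not published its actual pod design.

import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Pod:
    """One repeatable building block: racks plus power plus cooling."""
    racks: int = 36
    kw_per_rack: float = 140.0  # high-density, liquid-cooled cabinets

    @property
    def capacity_mw(self) -> float:
        return self.racks * self.kw_per_rack / 1_000

def pods_needed(target_mw: float, pod: Pod) -> int:
    """You deploy whole pods, so always round up."""
    return math.ceil(target_mw / pod.capacity_mw)

pod = Pod()
print(f"One pod: {pod.capacity_mw:.2f} MW")                    # 5.04 MW
print(f"Pods for the next 5 MW phase: {pods_needed(5, pod)}")  # 1
```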
The Future is Brown
What Patmos Hosting has demonstrated is more than just a clever hack. It is a strategic response to the physical constraints of the AI revolution. As the demand for computing continues to explode, the ability to rapidly convert and repurpose existing industrial infrastructure will become a significant competitive advantage.
The insights from Patmos’s 90-day deployment serve as a roadmap for the industry. The future of data centre construction, at least in the short to medium term, might look a lot less like pristine green fields and a lot more like dusty, forgotten factories being given a second life. We are entering an era of industrial alchemy, turning the relics of the 20th-century economy into the engines of the 21st.
This shift will reward those who can move fast, think creatively, and see potential where others only see decay. The AI gold rush isn’t just about writing clever code; it’s about having the vision to see a future data centre in the shell of a past-era factory.
What other types of old infrastructure do you think are ripe for this kind of digital transformation? Are we about to see a run on abandoned shopping centres and power stations? The race is on, and the most creative players are already looking for their next conversion opportunity.