So, your company has a shiny new automation pilot. It’s clever, it’s slick, and it saved a department a handful of hours last week. Everyone is patting themselves on the back. Now comes the hard part: scaling it. And according to the numbers, you’re probably going to fail. Miserably.
It’s a brutal truth, but as Greg Holmes from Apptio, now an IBM company, recently pointed out, a staggering 80 percent of new innovation projects simply don’t make it. They wither on the vine long before reaching a full enterprise deployment. But why? Is the technology not good enough? Is the vision lacking? Rarely. The real killer is something far less glamorous: bad maths. Or, more precisely, a complete failure to apply financial rigour.
The Pilot Programme Illusion
Let’s be honest, getting an automation pilot off the ground is the fun part. It’s like building a one-off concept car; you throw resources at it, over-provision the infrastructure because you can, and celebrate the sleek result. The problem? You haven’t built a car; you’ve built a very expensive sculpture that happens to have wheels. You have no idea what it would actually cost to manufacture ten thousand of them.
This is the core crisis of automation scaling. In the controlled environment of a pilot, the true costs are hidden. As Holmes explained in a recent piece for Artificial Intelligence News, once you try to scale, reality bites back hard. “API calls can multiply, exceptions appear at volume… support overheads grow.” Suddenly, your sleek, efficient bot is a ravenous, resource-guzzling monster, and your production overheads are spiralling out of control.
You’re left with a project that costs more to run than the value it creates. The business case evaporates, and the project is quietly shelved, becoming another statistic.
From Reactive Panic to Proactive Planning with FinOps
How do you break this cycle? You stop treating technology spending like a magic black box. You need to bring financial accountability directly into your cloud and automation operations. This is the essence of FinOps.
Think of it like this: for years, developers built things and threw them “over the wall” to the operations team to run. The DevOps movement broke down that wall. FinOps does the same for finance. It integrates financial management into the technology lifecycle from day one. As Holmes puts it, it’s a shift “from being very reactive to proactive”.
Instead of getting a shocking cloud bill at the end of the month and scrambling to find out why, you’re engineering for cost-efficiency from the start. And the single most important metric for this is the cost per transaction.
The Only Number That Really Matters
Do you know how much it costs your business every time a customer logs in, an invoice is processed, or a support ticket is resolved by one of your automated systems? If you don’t, you are flying blind.
Cost per transaction is the unit-economics metric that underpins successful automation scaling. It’s the ground truth. It allows you to perform a meaningful ROI analysis because it connects your infrastructure spend directly to a tangible business action. When you know that number, you can make intelligent decisions. If a new feature doubles the cost per transaction, is the added value worth it? If a process change halves it, you’ve just found a massive efficiency gain.
Without this metric, you’re just guessing.
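To make the metric concrete, here is a minimal sketch in Python of the arithmetic involved: take the infrastructure and support spend you can attribute to a process over a period, divide it by the transactions that process handled, and compare scenarios. The process, figures, and currency are invented assumptions for illustration, not data from the article.

```python
# Illustrative sketch: unit economics for one automated process.
# All figures are invented examples, not real benchmarks.

def cost_per_transaction(period_spend: float, transactions: int) -> float:
    """Spend attributed to a process over a period, divided by transactions handled."""
    if transactions == 0:
        raise ValueError("No transactions recorded for this period")
    return period_spend / transactions

# Hypothetical monthly numbers for an invoice-processing bot.
baseline = cost_per_transaction(period_spend=12_000, transactions=48_000)          # £0.25 per invoice
with_new_feature = cost_per_transaction(period_spend=21_600, transactions=48_000)  # £0.45 per invoice

# The FinOps question: does the feature add more value per invoice than the uplift costs?
uplift = with_new_feature - baseline
print(f"Baseline: £{baseline:.2f}/txn, with feature: £{with_new_feature:.2f}/txn, "
      f"value needed to justify it: £{uplift:.2f}/txn")
```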
Building the Guardrails for Growth
Of course, tracking costs is only one part of the equation. You also need to control them. This isn’t about stifling innovation with bureaucracy; it’s about embedding smart governance directly into the tools your developers are already using.
Governance That Isn’t a Bottleneck
Forget endless review boards and manual checks. Modern governance means writing the rules directly into your development platforms like GitHub. You can set automated policies that, for example, prevent a developer from spinning up a wildly expensive server instance for a low-priority task.
This is where technologies like Infrastructure-as-Code (IaC), using tools such as HashiCorp Terraform, become so powerful. IaC allows you to define your entire IT environment in code. By embedding policy enforcement within that code, you ensure that everything deployed is compliant, secure, and cost-optimised by default. It’s like having a building inspector automatically check the blueprints before a single brick is laid.
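To give a rough feel for what embedding a cost rule in the pipeline can look like, here is a short Python sketch that scans a Terraform plan (exported with `terraform plan -out=plan.out` followed by `terraform show -json plan.out > plan.json`) and fails the build if any EC2 instance type falls outside an approved list. The allow-list, file names, and exit behaviour are assumptions for the example; a real setup would more likely express this as policy-as-code in a dedicated engine such as Open Policy Agent or HashiCorp Sentinel wired into CI.

```python
# Sketch of a CI cost-policy gate: block a deployment if the Terraform plan
# asks for an EC2 instance type that has not been cost-vetted.
import json
import sys

APPROVED_INSTANCE_TYPES = {"t3.micro", "t3.small", "t3.medium"}  # example allow-list

def check_plan(path: str = "plan.json") -> list[str]:
    """Return a list of policy violations found in the JSON plan file."""
    with open(path) as f:
        plan = json.load(f)
    violations = []
    for change in plan.get("resource_changes", []):
        if change.get("type") != "aws_instance":
            continue
        after = (change.get("change") or {}).get("after") or {}
        instance_type = after.get("instance_type")
        if instance_type and instance_type not in APPROVED_INSTANCE_TYPES:
            violations.append(f"{change['address']} requests {instance_type}")
    return violations

if __name__ == "__main__":
    problems = check_plan()
    if problems:
        print("Cost policy violations:\n  " + "\n  ".join(problems))
        sys.exit(1)  # fail the pipeline before anything is provisioned
    print("Plan is within the approved instance-type policy")
```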
Seeing the Whole Chessboard with TBM
Finally, to truly succeed at an enterprise deployment, you need a common language that both the tech department and the executive board can understand. This is where the Technology Business Management (TBM) framework comes in.
TBM provides a standard taxonomy to categorise all IT costs, linking them to business capabilities. It helps you answer the big questions. How much are we spending on “running the business” versus “innovating for the future”? What is the true total cost of ownership of that 20-year-old legacy system?
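As a heavily simplified illustration of the kind of roll-up a TBM taxonomy enables, the sketch below tags a handful of invented cost line items as “run” or “grow” and totals spend by classification. The categories and amounts are assumptions for the example; real TBM implementations use the standard TBM Council taxonomy and far richer allocation rules.

```python
# Simplified TBM-style roll-up: classify cost line items and answer
# "running the business" vs "innovating for the future" at a glance.
# Line items and amounts are invented for the example.
from collections import defaultdict

cost_items = [
    {"item": "Legacy claims system hosting", "tower": "Compute",     "class": "run",  "monthly_cost": 42_000},
    {"item": "Invoice bot API usage",        "tower": "Platform",    "class": "run",  "monthly_cost": 9_500},
    {"item": "Claims replacement build",     "tower": "Application", "class": "grow", "monthly_cost": 30_000},
    {"item": "ML experimentation cluster",   "tower": "Compute",     "class": "grow", "monthly_cost": 18_000},
]

totals = defaultdict(float)
for item in cost_items:
    totals[item["class"]] += item["monthly_cost"]

total_spend = sum(totals.values())
for classification, amount in sorted(totals.items()):
    print(f"{classification:>4}: £{amount:,.0f}/month ({amount / total_spend:.0%} of spend)")
```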
As the team at Apptio has demonstrated, TBM allows you to make strategic trade-offs. One case cited in the Artificial Intelligence News article shows how you can decide whether to patch a legacy system or invest in a modern replacement based on a proper financial model, not just a gut feeling. “Just because something’s legacy doesn’t mean you have to retire it,” Holmes wisely notes, but you need the data to justify that decision.
Lessons from the Front Line
This isn’t just theory. Companies are putting this into practice and seeing incredible results.
– Liberty Mutual did a deep dive into its consumption metrics and saved around $2.5 million. By understanding exactly what it was spending and why, the company was able to optimise its infrastructure and eliminate waste.
– Commonwealth Bank of Australia undertook a massive analysis of 2,000 applications to determine their suitability for scaling. This kind of systematic evaluation is what separates successful large-scale automation from a collection of expensive science projects.
The Future Belongs to the Financially Fit
The era of “move fast and break things” is giving way to “scale smart and prove value”. The next wave of automation and AI will be exponentially more powerful, but also more resource-intensive. Simply throwing compute power at a problem without understanding the unit economics will be a recipe for financial ruin.
Successful automation scaling is not just a technical challenge; it’s a financial one. It demands a culture of cost-consciousness, powered by FinOps. It requires the guardrails of embedded governance and the strategic oversight of a framework like TBM.
The companies that master this financial discipline will be the ones that unlock the true promise of automation, building efficient, scalable, and profitable operations. The rest? They’ll be stuck in pilot mode forever, wondering where it all went wrong.
So, let me ask you: do you know your cost per transaction? And if not, what’s stopping you from finding out?