It seems there’s a new disconnect brewing in the world of corporate tech, and it smells suspiciously like wishful thinking. In boardrooms across North America, executives are patting themselves on the back, dazzled by the enormous returns from their investments in artificial intelligence. Meanwhile, down in the digital trenches where the actual work gets done, the people operating these supposedly brilliant systems are far from convinced.
This isn’t just a simple difference of opinion. It’s a chasm, a fundamental AI trust gap that highlights a dangerous divergence between strategy and reality. A recent study from Digitate, highlighted by Artificial Intelligence News, lays it out in stark numbers: 61% of C-suite leaders believe their AI is “very trustworthy.” Ask the practitioners—the people whose jobs depend on this tech working flawlessly—and that number plummets to just 46%. When the people pushing the buttons don’t trust the machine, you don’t have a technological revolution; you have an accident waiting to happen.
Executive Bling and the Seduction of ROI
So why the rosy view from the top? To put it bluntly, money. The same report shows North American firms are pocketing a median return on investment of a staggering $175 million from deploying so-called ‘agentic AI’—systems designed to reason and act on their own. When you see a number like that on a slide, it’s easy to believe you’ve found the holy grail of efficiency.
This is classic executive overconfidence. The dashboard looks green, the profits are rolling in, and the strategy appears to be a wild success. But this high-level view obscures the messy reality. It’s like a general planning a battle from a map miles away from the front line. The strategy looks perfect on paper, but it ignores the mud, the equipment failures, and the exhausted soldiers.
The C-suite is incentivised to believe in the AI because it validates their expensive strategic bets. They are sold a vision of flawless automation and a future where IT departments are no longer cost centres but profit engines. As Digitate’s Avi Bhagtani puts it, “Agentic AI is the bridge between human ingenuity and autonomous intelligence that marks the dawn of IT as a profit-driving, strategic capability.” It’s a compelling narrative, but it’s only half the story.
The View from the Engine Room: Practitioner Scepticism
Down in the operational engine room, the story is very different. The practitioners—the engineers, the cybersecurity analysts, the IT ops teams—are the ones who get paged at 3 a.m. when the ‘autonomous’ system has a meltdown. Their practitioner scepticism isn’t ideological; it’s born from daily, frustrating experience.
The Digitate study reveals a fascinating paradox. While the goal of agentic AI is to reduce human involvement, the number one concern for organisations (cited by 47%) is the persistent need for human intervention. Think about that for a moment. The primary selling point of the technology is autonomy, yet its biggest problem is that it isn’t actually autonomous.
This is where the trust gap becomes a serious operational risk. Practitioners see AI not as a reliable colleague but as an unpredictable intern—occasionally brilliant, but just as likely to cause a major incident that a human has to clean up. They are wrestling with AI’s ‘black box’ problem, where systems make decisions that are impossible to explain or audit, a terrifying prospect in a field like cybersecurity.
Governance? Or Just Going Fast and Breaking Things?
This entire situation screams of a monumental governance misalignment. North American enterprises, in their rush to cash in on AI, have prioritised speed and autonomy above all else. They are, in essence, letting the technology run ahead of the policies and oversight needed to manage it safely.
Contrast this with the approach in Europe. While European firms report a similar ROI ($170 million), their focus is markedly different. The report notes they are placing a greater emphasis on governance and data stewardship. This isn’t about being bureaucratic or slow; it’s about building a sustainable foundation. They understand that trust isn’t a feature you can add later. It has to be baked in from the start.
Proper risk assessment models for AI shouldn’t just focus on financials. They must also quantify the risk of system failure, the cost of constant human oversight (the ‘cost-human conundrum’), and the long-term damage of deploying technology that your own teams don’t believe in. North America’s current model seems to be all upside, with the downside delegated to the IT team to worry about.
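To make that concrete, here is a minimal, illustrative sketch of what a risk-adjusted view might look like: the headline return is discounted by the expected cost of incidents and the ongoing cost of human oversight. The structure is the point; every figure and field name below is a hypothetical placeholder, not data from the Digitate study.

```python
from dataclasses import dataclass


@dataclass
class AgenticAIRiskModel:
    """Toy model: value of an agentic AI deployment after operational risk.

    All inputs are illustrative placeholders, not figures from the report.
    """
    gross_annual_return: float       # the headline ROI on the board slide
    incident_probability: float      # estimated chance of a serious AI-caused incident per year
    incident_cost: float             # expected cleanup and downtime cost per incident
    oversight_hours_per_week: float  # human hours spent supervising the 'autonomous' system
    loaded_hourly_rate: float        # fully loaded cost of a practitioner hour

    def risk_adjusted_return(self) -> float:
        expected_incident_loss = self.incident_probability * self.incident_cost
        annual_oversight_cost = self.oversight_hours_per_week * 52 * self.loaded_hourly_rate
        return self.gross_annual_return - expected_incident_loss - annual_oversight_cost


# Hypothetical example: a deployment that looks great gross, less impressive net.
model = AgenticAIRiskModel(
    gross_annual_return=10_000_000,
    incident_probability=0.25,
    incident_cost=4_000_000,
    oversight_hours_per_week=120,
    loaded_hourly_rate=95,
)
print(f"Risk-adjusted annual return: ${model.risk_adjusted_return():,.0f}")
```

Even in this crude form, the exercise forces the two costs practitioners keep flagging, unplanned incidents and constant supervision, onto the same ledger as the celebrated returns.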
A Tale of Two Continents: The Race for Autonomy
The case emerging from North American enterprises is a perfect microcosm of this conflict. According to the data cited by Artificial Intelligence News, 78% of organisations are deploying agentic AI in IT operations. They are chasing a future where, by 2030, an estimated 74% of enterprises will achieve semi-to-full autonomy.
This sounds impressive, but it’s a house built on sand if the trust gap persists. The European model offers a different path. By focusing on governance first, they may be moving more deliberately, but they are also building institutional resilience. They are creating a framework where practitioners are part of the process, not just passive users of a tool they mistrust. Which approach do you think will win in the long run: the sprinter who goes out too fast, or the marathon runner who paces themselves?
Fixing the Disconnect: How to Actually Build Trust
So, how do organisations bridge this dangerous AI trust gap? It’s not about sending out more memos or holding rah-rah all-hands meetings. The solution requires a fundamental shift in how AI is developed and deployed.
– Radical Transparency and Communication: Stop treating AI capabilities like a state secret. Leaders need to be brutally honest about what the AI can—and, more importantly, cannot—do. If a system requires human oversight, say so. Frame it not as a failure of the AI but as a human-machine partnership.
– Human-Centric AI Development: The era of technologists building tools in isolation and throwing them over the wall to users must end. Involve practitioners from day one. Let their experience and scepticism shape the AI’s development. They are the ones who know the edge cases and the real-world complexities. Building with them is the only way to build their trust.
Ultimately, the goal isn’t just to make AI that works; it’s to make AI that people trust to work. As Bhagtani aptly notes, “As organisations balance autonomy with accountability, those that embed trust, transparency, and human engagement into their AI strategy will shape the future of digital business.”
The executive obsession with AI is understandable given the glittering financial returns. But those returns are fragile if they are built on technology that the frontline staff views with suspicion. Ignoring the practitioner scepticism isn’t just bad for morale; it’s a catastrophic business and security risk. The real question isn’t whether AI can generate profit, but whether organisations have the wisdom to build it responsibly.
What do you think? Is this AI trust gap a temporary growing pain, or a deep-seated flaw in how we are adopting new technology?