It seems we’re being lied to about artificial intelligence. That’s the blunt assessment from Shyam Sankar, the Chief Technology Officer of Palantir, and it’s a difficult point to argue with. The public discourse around AI feels less like a sober, adult conversation and more like a chaotic mix of a Hollywood blockbuster trailer and a doom-laden documentary. We are caught in a pincer movement of hype: utopian proclamations from Silicon Valley evangelists on one flank, dystopian warnings of a future devoid of jobs and privacy on the other.
This whirlwind of narratives is creating a dangerously skewed public perception of AI. On one side, you have figures like Elon Musk musing that AI could make retirement savings obsolete. On the other, Palantir, a company not exactly known for its cuddly public image, is positioning its CTO to declare that “AI is a tool for the American worker, not his replacement.” This dizzying back-and-forth isn’t just confusing; it’s obscuring the very real AI transparency issues that demand our immediate attention.
What Are We Even Talking About?
When we discuss “transparency” in AI, it’s not simply about open-sourcing code. True transparency is about clarity. It’s about knowing what data an AI was trained on, understanding the logic (or lack thereof) behind its decisions, and being brutally honest about its limitations. Think of it like a food label. You expect to see the ingredients, the nutritional information, and any allergy warnings. With AI, we’re often being served a gourmet meal with no ingredient list, just a promise from the chef that it’s delicious.
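To make the food-label analogy concrete, here is a minimal sketch of what a machine-readable label for an AI system might look like. Everything in it is hypothetical: the ModelLabel class, its field names, and the example values are illustrative stand-ins, loosely inspired by the industry’s “model card” idea rather than any shipping standard.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "food label" for an AI model.
# Every field name here is illustrative, not a real standard.
@dataclass
class ModelLabel:
    name: str
    training_data_sources: list[str]   # the "ingredients"
    intended_uses: list[str]           # what the chef says it is for
    known_limitations: list[str]       # the "allergy warnings"
    evaluation_notes: str              # how (and whether) it was tested

label = ModelLabel(
    name="example-chatbot-v1",
    training_data_sources=["licensed news archive", "public web crawl"],
    intended_uses=["drafting customer-support replies"],
    known_limitations=["invents citations", "evaluated in English only"],
    evaluation_notes="Tested only on in-domain support tickets.",
)

print(label)  # the label travels with the model, visible before use
```

The point is not the format but the discipline: the ingredients, the intended uses, and the warnings are written down and visible before anyone is served the meal.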
Right now, the industry’s approach to ethical AI communication is failing this basic test. Corporations are quick to trumpet their AI-powered successes—Google’s parent company, Alphabet, for instance, has seen its revenue soar, partly on the back of its AI investments. Yet, when things go wrong, a sudden fog descends. This selective transparency is a strategic choice, designed to maximise profit and minimise accountability. The problem is, the cracks are starting to show, and they are not small.
The Hidden Toll of Our AI Obsession
The conversation about the risks of AI often spirals into futuristic debates about superintelligence. Meanwhile, the clear and present dangers are swept under the carpet. These aren’t hypothetical problems; they are happening right now, and they affect us all.
Let’s start with your privacy. The recent data breach at Chat & Ask AI, a seemingly innocuous chatbot application, exposed hundreds of millions of private messages. As detailed in a Fox News AI newsletter, this wasn’t a minor leak; it was the digital equivalent of every whispered secret in a city being broadcast on the evening news. This event, alongside massive breaches like the one at SoundCloud, which exposed nearly 30 million user accounts, highlights a terrifying reality: the data we feed these systems is a ticking time bomb.
Then there’s the bill. Not the one for the app subscription, but your electricity bill. Powering these massive AI models takes an astonishing amount of energy. The data centres that are the engine rooms of the AI revolution are energy guzzlers on an unprecedented scale. One stunning forecast suggests that by December 2025, U.S. home electricity costs could be 42% higher than they were a decade earlier, with AI’s thirst for power a significant factor. When the CEO of Chevron, Mike Wirth, is publicly discussing strategies to generate affordable energy to meet this demand, you know the issue has transcended the tech world and become a fundamental economic one.
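For a sense of scale, the arithmetic behind that forecast is simple enough to check: a 42% rise over ten years implies an average compound increase of roughly 3.6% a year. A quick sketch (the 42% figure is the forecast quoted above, not my own estimate):

```python
# Back-of-the-envelope: what annual increase does "42% higher
# over a decade" imply? The 42% is the quoted forecast, not
# an independent estimate.
total_increase = 1.42   # +42% over the decade
years = 10
annual_rate = total_increase ** (1 / years) - 1
print(f"Implied average annual rise: {annual_rate:.1%}")  # ~3.6%
```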
When Misinformation Turns Malicious
This climate of misinformation about technology and poor transparency creates the perfect petri dish for scams. When people don’t understand a technology, they are far more vulnerable to those who would exploit it. The cautionary tales are no longer confined to sci-fi.
Consider the heart-wrenching story of a woman who lost $81,000 and her home to a deepfake romance scam. The scammer used AI-generated voice and video to create a convincing but entirely fake persona, methodically draining her life savings. This isn’t just a sophisticated phishing email; it’s a weaponised form of deception, preying on human emotion with technological precision.
These incidents corrode public trust far more effectively than any academic paper on algorithmic bias. They provide tangible, painful proof that the guardrails are missing. The weak federal regulation surrounding AI means we are effectively in a Wild West environment. Companies can launch powerful tools with little oversight, leaving consumers to navigate a landscape filled with digital snake-oil salesmen and expertly crafted traps. Without robust ethical AI communication from both creators and regulators, the public is left defenceless.
Building Trust in an Age of Scepticism
Is it any wonder, then, that voters are sceptical? The rapid, unregulated adoption of AI is ringing alarm bells for a public that feels it is being left behind. The promise of an AI-powered utopia means very little when your electricity bill is climbing and headlines are filled with stories of devastating scams and colossal data breaches. The current approach is creating a deep and dangerous rift between the tech industry and the society it claims to serve.
So, how do we bridge this gap? The path forward isn’t to halt progress, but to demand better.
– For Corporations: The era of “move fast and break things” is over. It’s time for “build carefully and explain everything.” This means genuine, painful transparency about data sourcing, model limitations, and potential risks. It’s about treating users as partners, not just data sources.
– For Regulators: It is time to wake up. We need intelligent, modern regulation that protects consumers without strangling innovation. This means investing in expertise and creating clear rules of the road for AI development and deployment.
– For the Public: We must become more discerning consumers of technology. We need to ask harder questions and demand clearer answers. Blind trust is no longer an option.
The narrative around AI needs to shift from one of myth-making to one of honest accounting. We require a full and frank discussion about the real costs, the genuine benefits, and the ethical framework needed to manage this powerful technology.
Anything less is, as Sankar put it, simply a lie.
What is the one question you would ask a major tech CEO about their AI strategy if you were guaranteed a completely honest answer?