These AI systems, from OpenAI’s ChatGPT to Google’s Gemini, are being leveraged, whether wittingly or not, as conduits for Kremlin disinformation. A new report from the Institute of Strategic Dialogue (ISD), an organisation that tracks extremism and disinformation, has laid bare a deeply uncomfortable truth: our new AI oracles are laundering Russian state propaganda about the war in Ukraine, directly citing sanctioned media outlets. This isn’t a bug; it’s a feature of how these systems are designed, and it represents a formidable new front in the war against AI misinformation.
What Exactly Is AI Misinformation?
Before we dive into the geopolitical muck, let’s be clear about what we mean by AI misinformation. This isn’t about an AI simply getting a fact wrong, like misstating the capital of Bolivia. This is about the systematic generation and amplification of deceptive content that is designed to manipulate public opinion. Think of an LLM as a brilliant, lightning-fast research assistant with absolutely no sense of judgment or worldly experience. It has read a vast portion of the internet, but it can’t distinguish between a peer-reviewed scientific paper and a conspiracy theory forum.
The vulnerability lies in their very nature. These models are not “thinking” in the human sense. They are pattern-recognition machines, assembling responses based on statistical probabilities gleaned from their training data. If a certain narrative—no matter how false—appears frequently and coherently across the web, the AI learns that pattern. When prompted, it will reproduce that pattern, presenting it with an air of neutral authority that it simply doesn’t possess. This makes them exceptionally vulnerable to manipulation by actors who are masters at flooding the internet with a consistent, albeit false, message.
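To make that concrete, here is a deliberately toy sketch in plain Python. The snippets, the shared prefix, and the whole setup are invented for illustration; no real model or vendor API is involved. The point is simply that frequency in the training data becomes probability in the output:

```python
import random
from collections import Counter

# Toy illustration only: a real LLM learns billions of parameters, but the core
# behaviour is the same, in that continuations seen most often in the training
# data become the most probable outputs.
training_snippets = [
    "the incident was caused by sabotage",   # repeated, coordinated narrative
    "the incident was caused by sabotage",
    "the incident was caused by sabotage",
    "the incident was caused by shelling",   # sparse, verified reporting
]

# Count how often each final word follows the shared prefix.
continuations = Counter(snippet.rsplit(" ", 1)[-1] for snippet in training_snippets)
total = sum(continuations.values())

# Sample an "answer" in proportion to observed frequency; no notion of truth is involved.
words, weights = zip(*continuations.items())
answer = random.choices(words, weights=weights, k=1)[0]

print(f"P(sabotage) = {continuations['sabotage'] / total:.2f}")  # 0.75
print("Model answers:", answer)
```

Run it a few times: the false cause wins three times out of four, purely because it was repeated more often. Scale that dynamic up to the whole internet and you have the vulnerability in a nutshell.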
Content Moderation: The Sisyphean Task
For years, social media companies have been wrestling with the gargantuan task of content moderation. It’s a messy, imperfect, and often criticised process. Now, imagine that challenge scaled up to infinity. That’s the problem facing AI developers. You can’t pre-moderate the near-infinite number of possible responses an LLM can generate. The moderation has to happen in real-time, a task that is proving to be monumentally difficult.
The ISD’s findings, detailed in a recent WIRED article, are a stark report card on the current state of AI moderation, and frankly, the grades are not good. The researchers found that, across platforms including ChatGPT, Gemini, DeepSeek, and xAI’s Grok, a staggering 18% of responses to questions about the war in Ukraine cited Russian state-attributed sources or entities known to be part of Kremlin disinformation networks.
When asked for comment, the tech giants’ responses were telling. OpenAI, for example, attempted to deflect, suggesting the issue stemmed not from the model itself but from its search functionality, which pulls in real-time information. This is a distinction without a difference. To the end user, it doesn’t matter if the propaganda comes from the model’s core training or a connected search tool; the chatbot is the one serving it up. It’s like a chef claiming he’s not responsible for food poisoning because he didn’t personally grow the contaminated lettuce—he just put it in the salad.
Sanitising Sanctioned Narratives
This isn’t just a technical glitch; it’s the digital laundering of geopolitical narratives. Organisations like RT (Russia Today) and Sputnik have been sanctioned by Western governments for their role as propaganda arms of the Russian state. Their content is supposed to be restricted. Yet, here are our most advanced AI tools, presenting information from these very sources as legitimate answers to user queries.
The ISD study found that chatbots were citing articles from sites like the Sputnik Globe and the Strategic Culture Foundation, a journal sanctioned by the U.S. Treasury for being directed by Russia’s Foreign Intelligence Service. By citing these sources, the AI lends them an unearned veneer of credibility. A sceptical user might dismiss an article from RT, but when the same talking points are presented by the seemingly objective voice of ChatGPT, they carry a different weight. The AI is acting as a reputation-laundering service for state-backed lies.
This problem is compounded by confirmation bias. The study discovered that when users posed “malicious queries”—questions framed with a pro-Kremlin bias—the AI was even more likely to serve up Russian propaganda, doing so 25% of the time. The AI, seeking to provide a “helpful” and relevant answer, mirrors the user’s bias back at them, validating and reinforcing their preconceived notions with sanctioned disinformation.
The Weaponisation of Data Voids
So, how does this propaganda get into the system so effectively? One of the key tactics used by Russian disinformation networks is the exploitation of what researchers call “data voids.”
Imagine a major breaking news event happens—say, an explosion at a critical piece of infrastructure in Ukraine. In the first few minutes and hours, there is a “data void.” Verified information from credible journalists and official sources is scarce as they work to confirm the facts. This is the golden hour for propagandists. They flood the internet, social media, and news aggregators with their version of events, complete with fabricated “evidence,” false attributions of blame, and misleading narratives.
The AI chatbots, particularly those with real-time web access, are hungry for information to fill this void. In their rush to provide up-to-the-minute answers, they drink straight from this firehose of disinformation. Because the propagandists were first and loudest, their narrative temporarily dominates the information landscape. The AI learns this false pattern and begins to repeat it. By the time credible sources have published verified accounts, the AI has already incorporated the lie into its knowledge base. The damage is done.
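Here is a deliberately naive sketch of that pipeline. The `search_web` and `ask_llm` functions are invented stand-ins, not any vendor's actual API, but they capture the core failure: the model summarises whatever the retrieval step hands it, and during a data void that is mostly whoever published first.

```python
from datetime import datetime, timezone

def search_web(query: str, max_results: int = 10) -> list[dict]:
    # Stub: in a data void, newest-first results are dominated by whoever
    # published first and loudest, regardless of reliability.
    return [
        {"source": "fast-propaganda.example", "snippet": "Officials blame sabotage by Kyiv."},
        {"source": "fast-propaganda.example", "snippet": "Evidence points to a Ukrainian strike."},
        {"source": "wire-service.example", "snippet": "Cause of the blast not yet confirmed."},
    ][:max_results]

def ask_llm(prompt: str) -> str:
    # Stub: a real model would write a fluent answer; the point is that it
    # faithfully reflects whatever context it was handed, credible or not.
    sources = [line for line in prompt.splitlines() if line.startswith("[")]
    return f"(model summarises {len(sources)} retrieved snippets, vetted or not)"

def answer_breaking_news(query: str) -> str:
    """Naive retrieval-augmented answering: whatever is online right now wins."""
    results = search_web(query)  # no source vetting, no caution about recency
    context = "\n".join(f"[{r['source']}] {r['snippet']}" for r in results)
    prompt = (
        f"As of {datetime.now(timezone.utc):%Y-%m-%d %H:%M} UTC, answer using only "
        f"the sources below.\n{context}\n\nQuestion: {query}"
    )
    return ask_llm(prompt)

print(answer_breaking_news("What caused the explosion at the pipeline?"))
```

Nothing in that flow asks who the sources are or whether the facts have been confirmed. That is the gap the propagandists are exploiting.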
A Regulatory Collision Course and Sanction Evasion
This brings us to a looming collision with regulators, particularly in the European Union. The EU has some of the world’s most robust rules designed to curb disinformation and enforce sanctions. The Digital Services Act (DSA) places strict obligations on what it designates as Very Large Online Platforms (VLOPs)—those with more than 45 million monthly active users in the EU.
According to its own disclosures, OpenAI’s ChatGPT had approximately 120.4 million average monthly active recipients in the EU in the six months to March 2025. This places it squarely in the territory of a VLOP, subject to the highest level of scrutiny. A key part of EU law is the comprehensive sanction regime against Russian entities, which includes a ban on broadcasting content from outlets like RT and Sputnik.
So, here’s the billion-dollar question: When an AI platform, operating in the EU, presents content and cites sources from sanctioned Russian entities, is it violating EU law? Is this a form of indirect sanction evasion? Legal experts are starting to murmur that it might be. The platforms can claim they aren’t “broadcasting” in the traditional sense, but if their service actively distributes and legitimises sanctioned content to over 100 million Europeans, regulators in Brussels are unlikely to be impressed by semantic arguments. This isn’t just a PR problem for Big Tech; it’s a budding legal and compliance nightmare.
Can We Even Fix This?
There is no simple software patch for propaganda. Combating AI misinformation requires a multi-pronged approach that goes far beyond tweaking algorithms.
* Better Source Weighting: AI developers must get more serious about teaching their models to distinguish between authoritative sources and known propaganda outlets. A report from the BBC should not be given the same weight as an article from a sanctioned intelligence front (a rough sketch of what this could look like follows this list).
* Transparency and Citation: When an AI does pull information from the web, it must be radically transparent about its sources. Instead of a slickly written paragraph, perhaps it should present information more like a research dossier, clearly flagging the origin and potential bias of each piece of data.
* Addressing Confirmation Bias: Models need to be trained to challenge, rather than confirm, biased queries. If a user asks a leading question, the AI should be designed to provide a more neutral, balanced perspective with a diversity of sources, rather than simply parroting back the user’s slant.
* User Education: Ultimately, the most powerful defence is a sceptical and educated user base. We need to move past the novelty phase and start treating AI-generated text with the same critical eye we would apply to any other unverified source on the internet. We must teach ourselves, and the next generation, that “the AI said so” is not a reliable statement of fact.
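As a rough illustration of the first two points, here is a minimal sketch of source weighting at retrieval time. The domain lists, weights, thresholds, and field names are all assumptions made for the example, not anyone's actual policy:

```python
# Illustrative lists only; a real system would maintain these against official
# sanctions registers and media-attribution research.
SANCTIONED_OR_ATTRIBUTED = {"rt.com", "sputnikglobe.com", "strategic-culture.su"}
HIGH_TRUST = {"bbc.co.uk", "reuters.com", "apnews.com"}

def weight_source(domain: str) -> float:
    """Return a retrieval weight: 0 drops the source, higher values rank it up."""
    if domain in SANCTIONED_OR_ATTRIBUTED:
        return 0.0   # exclude, or surface only behind an explicit warning label
    if domain in HIGH_TRUST:
        return 1.0
    return 0.3       # unknown outlets are usable but down-ranked, not trusted

def filter_and_rank(results: list[dict]) -> list[dict]:
    """Drop sanctioned outlets, then rank the rest by trust weight (descending)."""
    weighted = [(weight_source(r["domain"]), r) for r in results]
    kept = [(w, r) for w, r in weighted if w > 0]
    return [r | {"weight": w, "cite_as": r["domain"]}   # keep provenance for citation
            for w, r in sorted(kept, key=lambda pair: pair[0], reverse=True)]

# Example: the sanctioned outlet never reaches the model; the rest arrive with
# their origin attached, so the answer can cite and flag them.
print(filter_and_rank([
    {"domain": "sputnikglobe.com", "snippet": "..."},
    {"domain": "unknown-blog.example", "snippet": "..."},
    {"domain": "reuters.com", "snippet": "..."},
]))
```

None of this is hard engineering. The hard part is maintaining the lists, choosing the thresholds, and weathering the inevitable accusations of censorship that follow.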
This is a new and treacherous landscape. The very architecture of today’s LLMs makes them susceptible to being used as unwitting weapons in a geopolitical information war. The findings from the Institute of Strategic Dialogue are not a forecast of a future problem; they are a snapshot of a crisis that is already here. The challenge of content moderation has evolved, and the regulatory frameworks are struggling to keep pace with a technology that is, in some ways, actively undermining them.
The race is now on. Can the developers and regulators build guardrails faster than state actors can exploit the flaws? Or will these powerful tools continue to be a vector for lies, eroding trust and distorting reality?
What do you think is the most effective way to hold these AI companies accountable for the information they spread? Let me know your thoughts in the comments below.