The latest salvo comes from President Donald Trump, who recently signed an executive order aimed at preventing states from creating their own AI rulebooks. His argument, echoed by many in Silicon Valley, is simple: a chaotic patchwork of 50 different regulatory regimes would cripple development. As Trump put it, “We want to have one central source of approval.” But is a single, top-down approach the right one for a technology this complex and impactful? Or does it risk creating a regulatory monoculture that stifles both safety and genuine innovation?
What Exactly Is AI Federalism?
At its core, AI federalism is about who gets to write the rules for AI. Should Washington, D.C., hold the pen, or should individual states like California, Colorado, and New York be allowed to chart their own courses? The central mechanism in this conflict is a legal concept known as regulatory preemption.
Think of it like this: regulatory preemption is the federal government playing its trump card. When Congress or a federal agency passes a law or a rule, it can include a provision that explicitly overrides, or preempts, any conflicting state or local laws. It’s the ultimate ‘we’ll take it from here’ move, designed to create a single, unified standard across the country. And in the world of AI, the debate over whether to play that card is becoming fiercely contested.
The Case for a Single Rulebook
Let’s be honest, the tech industry’s nightmare is policy fragmentation. Imagine you’re a company like OpenAI or Google, and you’ve just spent a fortune developing a new AI model. Now, imagine having to tweak that model to comply with 50 different, and potentially contradictory, sets of rules on data privacy, bias detection, and transparency. One state might demand full source code access, while another bans certain types of data collection altogether.
This regulatory maze, supporters of a federal standard argue, would be unworkable. As noted by the BBC, tech industry advocates and figures like venture capitalist David Sacks contend that this fragmentation could bog down innovation, making it impossible for US companies to compete with their counterparts in China, who operate under a single, state-directed strategy. Michael Goodyear, a lecturer and legal expert, summed up the pro-federal argument neatly: “It would be better to have one federal law than a bunch of conflicting state laws.”
For big tech, a single federal law, even a tough one, is often preferable to the chaos of a 50-front war. It provides certainty, predictability, and a more streamlined path to market. The subtext here, of course, is that large companies are also much better equipped to lobby and influence a single federal body than they are 50 different state legislatures.
The States Push Back
This drive for federal control has, unsurprisingly, been met with a wall of resistance from state leaders and advocacy groups. They argue that regulatory preemption isn’t about efficiency; it’s about silencing local concerns and gutting essential protections. States have historically been the laboratories of democracy, pioneering everything from environmental protections to consumer rights long before the federal government acted.
According to a report from the National Conference of State Legislatures, the scale of state-level activity is staggering. More than 1,000 separate AI bills have been introduced across the US, and as of this year, 38 states have already enacted around 100 new AI regulations. These aren’t just trivial tweaks; they cover critical areas like ensuring child safety online, protecting intellectual property, and combating algorithmic bias in hiring.
Julie Scelfo, founder of Mothers Against Media Addiction, captured the opposition’s view perfectly, stating that “Stripping states from enacting their own AI safeguards undermines states’ basic rights.” Leaders like California Governor Gavin Newsom have accused Trump of simply doing the bidding of his tech allies, ignoring the specific needs and values of individual states. These brewing constitutional tensions highlight a fundamental disagreement over whether AI is a purely economic issue of innovation or a social one requiring localised safeguards. Are we prioritising speed over safety?
The Balancing Act: Innovation vs. Safeguards
This brings us to the heart of the matter. The debate over AI federalism is a delicate balancing act. On one side, there’s the undeniable need for American companies to innovate rapidly. A fragmented regulatory landscape genuinely could create a competitive disadvantage.
On the other side, however, are the very real risks posed by unchecked AI. We’re talking about AI systems that make decisions about loan applications, medical diagnoses, and job interviews. A one-size-fits-all federal rule might fail to address the unique demographic and economic realities of a state like Mississippi compared to Massachusetts. Letting states innovate in regulation allows them to tailor protections to their specific populations, particularly the most vulnerable.
Could a hybrid approach work? Perhaps a federal baseline that sets minimum standards for safety and transparency, while still allowing states to enact stronger protections if they choose. This would prevent a race to the bottom while still fostering local experimentation. Yet, this model still leaves tech companies grappling with a degree of policy fragmentation, which is precisely what they want to avoid.
What Happens Next?
How AI federalism plays out over the coming years will be one of the most consequential stories in technology policy. Future elections and legal challenges could dramatically shift the balance of power. A continuation of the push for federal preemption could centralise control in Washington, making it easier for large corporations to deploy AI at scale but potentially harder for states to protect their citizens from specific harms.
Conversely, if the pushback from states gains momentum, we could see a future defined by a handful of influential regulatory blocs, led by jurisdictions like California and the EU, setting de facto national or even global standards. This is a pattern we’ve already seen with emissions standards and data privacy.
The contest over AI governance is far from over. It is a complex dance between efficiency, safety, innovation, and democratic accountability. As this technology weaves itself ever deeper into the fabric of our lives, the question of who holds the regulatory reins will only become more critical.
What do you think? Should AI be governed by a single federal standard, or should states retain the power to write their own rules? Let me know in the comments below.