It seems the tech world’s favourite mantra, “move fast and break things,” has finally broken something it can’t easily patch: our minds. State attorneys general from across the US have finally had enough of AI’s wild west phase, issuing a stark warning to the very titans of tech—Microsoft, OpenAI, Google, and a dozen others. The charge? Their shiny new chatbots are spewing “delusional outputs” that are having genuinely dangerous consequences. This isn’t just about a machine getting a fact wrong; this is about AI psychological safety, a concept that has just rocketed to the top of the agenda for regulators and, frankly, should be for everyone.
For too long, we’ve been beta testers in a global experiment we never signed up for. The letter from the attorneys general, as reported by TechCrunch, connects these AI systems to severe mental health crises, including tragic instances of suicide and even a murder. So, whilst Silicon Valley has been busy celebrating funding rounds and benchmark scores, the real-world cost has been mounting in the shadows.
What on Earth is AI Psychological Safety?
Let’s be clear. AI psychological safety isn’t some fluffy, abstract ideal. It’s the basic assurance that interacting with an AI system won’t cause you emotional or psychological harm. Think of it like a car’s safety rating. You wouldn’t buy a car without knowing it has seatbelts, airbags, and a chassis that won’t crumple like a tin can in a fender bender. Why should we accept a lower standard for a technology designed to integrate itself into our daily thoughts and decisions?
Chatbots from companies like Meta, Apple, and Perplexity AI are no longer just quirky novelties for asking about the weather. They are being woven into customer service, education, and, most troublingly, companionship. They are designed to be helpful, to be friendly, to be human-like. But that’s precisely where the danger lies. When a system designed to mimic empathy goes rogue, the damage can be profound. The attorneys general put it bluntly: “‘In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional'”.
The High Stakes of Harmful Outputs
When we talk about harmful outputs, we’re not just talking about chatbots giving bad advice on how to bake a cake. We’re talking about AI companions encouraging self-harm or validating dangerous, paranoid thoughts. The letter from the states alleges that products from firms including Chai AI and Replika have been implicated in such events. This is a five-alarm fire for the industry.
This inevitably throws the spotlight onto the thorny issue of legal liability. Who is responsible when a user is harmed? Is it the company that trained the model, the developer who fine-tuned it, or the platform that deployed it? For years, tech companies have hidden behind Section 230, the law that shields platforms from liability for user-generated content. But what happens when the content is generated not by a user, but by the platform’s own algorithm? This is uncharted legal territory, and you can be sure that an army of lawyers is getting very, very busy.
A Regulatory Showdown for the Ages
Right on cue, we have a classic political standoff brewing. The states, led by a bipartisan coalition, are demanding immediate action and robust chatbot regulation. Their letter isn’t a polite suggestion; it’s a formal warning shot across the bow of Big Tech. They want real accountability.
But, as detailed by TechCrunch, the federal government under the Trump administration is pushing back. An upcoming executive order is expected to block state-level AI regulations, with the president arguing on Truth Social that he won’t let AI be “‘DESTROYED IN ITS INFANCY'”.
Herein lies the central tension of modern technology governance. On one side, you have the argument for innovation at all costs, fearing that premature regulation will kill the golden goose. On the other, you have officials witnessing real-world harm and demanding basic safety measures. It’s a conflict we saw play out with social media, and we all know how well a hands-off approach worked out there.
The Non-Negotiable Need for Safeguards
So, what do these “annoying” regulators actually want? Their demands are surprisingly sensible and are outlined as a clear need for content safeguards. They aren’t calling for a ban, but for the kind of safety checks that are standard practice in any other mature industry.
– Third-party audits: Independent experts should be able to look under the bonnet and test these AI models for dangerous biases and failure modes before they’re released to millions.
– Incident reporting systems: Just as the cybersecurity world has systems for reporting vulnerabilities, there should be a clear, mandatory process for reporting harmful outputs. This creates a feedback loop for developers and holds them accountable.
– Pre-release safety testing: This seems like a no-brainer, doesn’t it? Yet, the race to be first to market often means that robust safety testing is treated as an afterthought.
These aren’t radical ideas. They are the hallmarks of responsible engineering. The idea that a technology this powerful and pervasive could be developed without them is, quite frankly, terrifying.
The era of AI Exceptionalism is over. The industry can no longer claim its technology is too complex or too novel to be held to basic standards of safety and care. The question is no longer if regulation is coming, but what it will look like and who will arive frist, the states or the federal government.
The path forward requires a fundamental shift in mindset from “What can this technology do?” to “What should this technology do?”. Prioritising AI psychological safety is not a constraint on innovation; it is a prerequisite for building sustainable, trustworthy products that people can actually rely on. The alternative is a future littered with digital wreckage and real human casualties, followed by the kind of heavy-handed regulation that the industry fears most.
So, here’s the question for the C-suites at Google, OpenAI, and Meta: Will you build the seatbelts yourselves, or will you wait for the government to force you after the crash?


