It seems OpenAI has a habit of spring cleaning its safety departments. Hot on the heels of dissolving its much-discussed “superalignment” group in 2024, the company has now dismantled its mission alignment team. One moment you have a team dedicated to ensuring AI aligns with human values, and the next, poof, it’s gone. So, what on earth is going on inside the world’s most-watched AI company? Are they simply rearranging the furniture, or are they quietly taking the brakes off the car whilst hurtling down a very steep hill?
This latest move feels less like a thoughtful restructuring and more like a strategic pivot that raises some serious questions about the company’s priorities. Let’s dig into what this dissolution really means for AI governance and the broader quest for AI trustworthiness.
The Safety Team Vanishes, A ‘Futurist’ Appears
According to a report from TechCrunch, the mission alignment team, a group of about six or seven people formed in September 2024, has been dissolved. Its members have reportedly been reassigned to other teams within the company. This team’s purpose, according to an OpenAI spokesperson, was to act as a “support function to help employees and the public understand our mission and the impact of AI.” A noble goal, certainly.
The team’s leader, Joshua Achiam, isn’t out of a job. Instead, he’s been given a rather grand-sounding new title: Chief Futurist. In this new capacity, he is said to be collaborating with physicist Jason Pruet. What does a Chief Futurist actually do? It sounds like a role that belongs in a science fiction novel rather than a corporate org chart, and it suggests a shift from practical implementation and oversight to blue-sky thinking. One can’t help but wonder if this is a promotion into irrelevance: kicking a key safety advocate upstairs where they can gaze at the stars instead of checking the engine.
This isn’t just an internal reshuffle; it’s a signal. When you dissolve a team whose very name is about alignment and trust, you are sending a powerful message about what the company values most right now. And it doesn’t appear to be cautious oversight.
Déjà Vu: A Pattern of Dissolving Oversight
To understand today’s news, you have to look at OpenAI’s recent history. This isn’t their first rodeo when it comes to disbanding safety initiatives. In 2024, the company made waves by dissolving its “superalignment” team, which was co-led by the company’s co-founder and chief scientist, Ilya Sutskever, before his departure. That team had an ambitious goal: to solve the technical challenges of controlling “superintelligent” AI within four years.
– The Superalignment Team (dissolved 2024): Focused on the long-term, existential risk of AI far surpassing human intelligence.
– The Mission Alignment Team (dissolved 2026): Focused on the more immediate challenge of translating OpenAI’s mission into practice and communicating it.
Both teams, in their own way, acted as internal checks and balances. They were the designated worriers, the people paid to think about what could go wrong. It’s a bit like a car manufacturer first closing its advanced research division for futuristic safety features, and then a couple of years later, getting rid of the department that writes the owner’s manual section on how to use the seatbelts. At what point do you start to question their commitment to safety altogether?
This pattern suggests that dedicated, ring-fenced safety teams might be seen as an impediment to the primary corporate objective: shipping products and scaling the business as quickly as possible.
‘Mission Realignment’ or Mission De-Prioritisation?
OpenAI frames these changes under the umbrella of mission realignment. It’s a tidy piece of corporate language, but what does it really mean? A mission realignment implies a conscious pivot in strategy. In this case, it appears to be a realignment away from centralised, dedicated safety teams and towards a model where safety is supposedly everyone’s job—a distributed responsibility.
On the surface, this sounds sensible. Embedding safety across all teams could be more effective than siloing it. However, the risk is that when safety becomes everyone’s job, it can easily become no one’s priority. Without a dedicated team to champion the cause, raise difficult questions, and hold product teams accountable, safety concerns can get drowned out by the louder, more urgent demands of release schedules and commercial targets.
This move directly undermines the perception of AI trustworthiness. Trust isn’t built on promises alone; it’s built on visible structures, processes, and a demonstrated commitment to accountability. By dismantling the very team named for alignment, OpenAI is making it harder for the public, regulators, and even its own employees to trust that its mission of “safe and beneficial AGI” is still the North Star.
The Future of AI Governance: A Ticking Clock?
The dissolution of the OpenAI alignment team should be a wake-up call for the entire industry. As AI models become more powerful and integrated into our daily lives, the need for robust AI governance frameworks becomes more critical, not less.
This isn’t just about preventing a sci-fi robot apocalypse. It’s about the near-term risks:
– Bias Amplification: AI systems that perpetuate and scale societal biases.
– Misinformation: The generation of convincing but false content that can sway opinions and disrupt society.
– Economic Disruption: The impact on jobs and the concentration of power in the hands of a few tech giants.
If a market leader like OpenAI is seen to be trimming its safety sails, it sets a dangerous precedent. Other companies, eager to compete, might feel pressured to follow suit, leading to a race to the bottom where safety is sacrificed for speed. As industry watchers like TechCrunch have noted, these internal shifts have external consequences for the entire ecosystem’s approach to responsible development.
So, what should effective AI governance look like? It requires more than just good intentions. It needs independent oversight, transparent reporting, and clear lines of accountability. Perhaps it’s time for external, third-party audits to become the norm, rather than relying solely on internal teams that can be dissolved at a moment’s notice.
Where Do We Go From Here?
OpenAI’s decision to dissolve its mission alignment team is a significant moment. It marks a potential shift from an era of publicly championed caution to one of commercially driven acceleration. Giving the former leader the title of Chief Futurist feels like a symbolic gesture, swapping tangible oversight for abstract vision.
The core tension in AI development has always been between moving fast and being careful. Right now, the scales at OpenAI seem to be tipping heavily towards speed. Whilst this might yield impressive new products in the short term, it erodes the very foundation of AI trustworthiness that is essential for long-term success and public acceptance.
This isn’t a settled matter. The story of AI is still being written, and its governance is one of the most important chapters. We, as users, observers, and citizens, must continue to ask the hard questions.
What do you think? Is embedding safety across teams a more effective strategy, or is dismantling dedicated safety teams a red flag? Let me know your thoughts in the comments below.


