Words matter, don’t they? Especially when they’re quietly scrubbed from the mission statement of a company building what might be the most consequential technology in human history. OpenAI, the creators of ChatGPT and Sora, recently performed a subtle but telling edit. They deleted the word ‘safely’ from their core mission. A small change on a webpage, perhaps, but it speaks volumes about the direction of the company and the immense challenge of AI safety governance.
Originally, back in 2023, the mission was to ensure artificial general intelligence, or AGI, “safely benefits humanity, unconstrained by a need to generate financial return.” Fast forward to today, and the mission is simply “to ensure that artificial general intelligence benefits all of humanity.” See what’s missing? That pesky little adverb, ‘safely,’ has vanished, and the constraint about financial returns? Gone. This isn’t just semantics; it’s a seismic shift in philosophy.
So, What Is AI Safety Governance, Anyway?
Before we dive into the corporate drama, let’s be clear about what we’re discussing. AI safety governance isn’t about assembling a toothless ethics board to rubber-stamp decisions. It’s the fundamental rulebook, the corporate constitution, that dictates how a company balances its race for progress with its duty to prevent catastrophic outcomes. Think of it like the regulatory framework for a nuclear power plant. You wouldn’t want the company running the plant to have a mission statement that quietly dropped the word ‘safely’ just because it was chasing higher profits, would you?
An AI ethics framework should be the guardrails that keep the speeding train of innovation on the rails. For a long time, we were led to believe that OpenAI was building the most robust guardrails in the industry. Now it seems the company is more interested in the speed of the train.
OpenAI’s New Clothes: The For-Profit Emperor
The OpenAI mission change didn’t happen in a vacuum. It’s the predictable, almost cliché, outcome of a journey from a noble non-profit to a capital-hungry corporation. This isn’t your typical start-up story; it’s a grand experiment in corporate responsibility that appears to be going sideways.
The Siren Call of Big Money
OpenAI’s new structure is a confusing hybrid beast. As detailed in a sharp analysis by The Conversation, the company is now a tangled web of a non-profit foundation (the OpenAI Foundation) and a for-profit public benefit corporation (the OpenAI Group). This restructure was supposedly designed to allow it to raise the colossal sums of money needed to build AGI. And raise money it has.
Microsoft has ploughed in over $13 billion for a 27% stake. SoftBank has reportedly invested a staggering $41 billion. With a valuation projected to rocket past $500 billion by 2026, the pressure to deliver financial returns is immense. When you have investors of that calibre, their voices tend to get very loud in the boardroom. The notion that the original mission could remain “unconstrained by a need to generate financial return” was, in hindsight, beautifully naive. The age-old conflict of shareholder vs society has arrived at AI’s front door, and it looks like the shareholders are winning.
A Governance Model Built on Hope
So, who is actually in charge? This is where the governance model becomes less of a safeguard and more of a test of faith. The non-profit foundation—the supposed guardian of humanity’s interests—holds only a 26% ownership stake. Microsoft, a key shareholder with its own massive commercial interests in AI, holds 27%. The remaining 47% is held by employees and other investors.
Do the maths. In a showdown, the non-profit doesn’t have the controlling vote: its 26% stake faces a combined 74% held by Microsoft, employees and other investors. Proponents will argue that the for-profit arm is a “capped-profit” entity, with returns for investors limited. But the cap is reportedly 100 times the investment. That’s not a cap; it’s a rocket ship with a ceiling somewhere in the stratosphere.
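To make ‘do the maths’ literal, here is a rough back-of-the-envelope sketch in Python. The stake percentages, Microsoft’s roughly $13 billion investment and the 100x cap are simply the figures quoted above; treating a bare stake-weighted majority as decisive is a simplification for illustration, not a description of OpenAI’s formal voting rules.

```python
# Back-of-the-envelope arithmetic using the figures quoted in this article.
# Assumption (simplification): a bare stake-weighted majority decides the outcome.

stakes = {
    "OpenAI Foundation (non-profit)": 0.26,
    "Microsoft": 0.27,
    "Employees and other investors": 0.47,
}

non_profit = stakes["OpenAI Foundation (non-profit)"]
everyone_else = 1.0 - non_profit

print(f"Non-profit stake: {non_profit:.0%}")                # 26%
print(f"Everyone else combined: {everyone_else:.0%}")       # 74%
print(f"Non-profit holds a majority: {non_profit > 0.5}")   # False

# The 'capped' profit: a reported 100x multiple on Microsoft's ~$13bn investment.
microsoft_investment_bn = 13
cap_multiple = 100
ceiling_bn = microsoft_investment_bn * cap_multiple
print(f"Ceiling on Microsoft's return: ${ceiling_bn:,} billion")  # $1,300 billion, i.e. ~$1.3 trillion
```

Even on Microsoft’s money alone, that ‘cap’ sits somewhere north of a trillion dollars.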
The overlapping boards and complex agreements approved by regulators in California and Delaware look more like an attempt to placate legal concerns than to create a genuinely accountable structure. When push comes to shove, and a decision must be made between deploying a potentially risky but wildly profitable model and holding it back for more safety checks, where do you think the pressure will lie?
Are There Better Ways to Build the Future?
It didn’t have to be this way. There are alternative models for managing entities with a profound public mission. Health Net, a US health insurer, for instance, is structured so that its non-profit parent holds an 80% controlling stake, ensuring the mission always outweighs the profit motive. Even The Philadelphia Inquirer newspaper is owned by a non-profit institute, insulating its journalism from purely commercial pressures.
OpenAI could have chosen a path that truly ring-fenced its mission. Instead, it chose a structure that invites conflict and prioritises fundraising. The very core of AI safety governance is about creating incentives for safe behaviour. OpenAI’s current structure creates a powerful incentive for speed and profit. As another insightful piece from The Conversation points out, this hybrid model is now a live test for whether AI will truly serve society or just its financial backers.
The Word Is Gone, but the Risk Is Not
So, where does this leave us? We are hurtling towards AGI, guided by a company that has publicly, if quietly, demoted safety in its list of priorities. The very entity that once positioned itself as the responsible steward of AI’s future now looks suspiciously like every other tech giant scrambling for market dominance.
The deletion of a single word may seem trivial, but it signals a fundamental shift in the culture and priorities at the heart of the AI revolution. It sets a dangerous precedent for others to follow. After all, if the ‘good guys’ at OpenAI don’t need to explicitly commit to safety, why should anyone else?
This isn’t just a debate for tech insiders and ethicists. The outcome of this battle over governance will affect everyone. The question we all need to be asking is no longer just ‘what can this technology do?’, but ‘who is in control, and what do they truly value?’
What do you think? Is it possible to build world-changing AI that ‘benefits humanity’ if ‘safely’ is no longer part of the conversation?


