On February 12, Platformer reported that OpenAI had disbanded its mission alignment team in recent weeks, transferring its employees and giving team lead Joshua Achiam a new title: "chief futurist." This is the second time in two years that OpenAI has dissolved an internal team dedicated to ensuring its AI systems are safe. The first time, in May 2024, it was a crisis. This time, it was a footnote.

May 2024

In July 2023, OpenAI created Superalignment, a team co-led by chief scientist Ilya Sutskever and Jan Leike, dedicated to developing ways to "steer and control" superintelligent AI systems. The company pledged 20% of its compute to the effort. It was OpenAI's most visible commitment to the idea that the technology it was building might be dangerous and that the danger required dedicated resources to address.

Ten months later, the team was gone. In April 2024, two safety researchers, including Leopold Aschenbrenner, a Sutskever ally, were fired for allegedly leaking information. In May, Sutskever himself left. Then co-lead Jan Leike resigned and published a thread that became the most widely cited insider critique of any AI lab: "Safety culture and processes have taken a backseat to shiny products." By May 19, Wired reported that the entire Superalignment team had either resigned or been absorbed into other groups. In July, the head of OpenAI's Preparedness team was reassigned to reasoning research.

The response was seismic. Leike's thread was quoted in news outlets for weeks. Researchers across the AI safety community called it a betrayal. It fed directly into the broader narrative that OpenAI had abandoned its founding mission in favor of commercial acceleration. Congressional offices cited it. The EU referenced it. It mattered.

February 2026

After Superalignment's collapse, OpenAI created a replacement: a mission alignment team, led by Joshua Achiam, carrying the same mandate with less fanfare. Sometime in recent weeks, that team was also disbanded. Employees were transferred. Achiam was given a new title.

Source: Platformer, February 2026: OpenAI disbanded its mission alignment team in recent weeks and transferred its employees; team lead Joshua Achiam will take on a "chief futurist" role.

Chief futurist. Not a resignation. Not a devastating public thread. Not a crisis. A title change. The person whose job was to ensure AI models align with human values accepted a new role, one with a title that sounds like it belongs to a keynote speaker at a corporate retreat. The team vanished without protest. The story appeared in a newsletter, not on the front page of Wired.

OpenAI did the same thing twice. Formed a safety team, publicized it, dissolved it. The pattern is identical. What changed between 2024 and 2026 is everything around the pattern.

What Changed

Three things were reported the same day the mission alignment news broke, and together they explain why nobody flinched.

First: Reuters reported that the Pentagon is pushing OpenAI, Anthropic, and others to make their AI tools available on classified networks — "without the standard user restrictions." The military doesn't want guardrails. It wants capability. The demand follows a two-year arc: OpenAI quietly removed its ban on military use in January 2024, partnered with Anduril for defense AI in December 2024, partnered with Leidos for federal agencies in January 2026, and gave the US military ChatGPT access this week. From "no military use" to "classified networks without restrictions" in twenty-five months. When the Pentagon is your client, an internal safety team isn't an asset. It's friction.

Second: Anthropic donated $20 million to Public First, a super PAC pushing for AI guardrails and transparency — explicitly positioned against OpenAI-backed PACs. The opponent: Leading the Future, a pro-AI super PAC launched by a16z and OpenAI's Greg Brockman in August 2025, which has raised $125 million to shape the midterms and prevent restrictive AI regulation.

$125 million raised by the OpenAI-aligned PAC. $20 million behind Anthropic's counter-PAC.

The alignment question is now a campaign finance fight. Two AI companies fund opposing super PACs. The question "how do we make AI safe?" is answered not by research teams but by political strategists, advertising buys, and midterm election spending. In that environment, an internal safety team at one company is irrelevant — the fight has moved to a venue where headcount doesn't matter and dollars do.

Third: Bloomberg reported that some of the biggest VC firms are now backing both OpenAI and Anthropic simultaneously — "breaking a taboo by investing in competing startups." The firms aren't just hedging on products. They're hedging on philosophies. If unrestricted AI wins — Pentagon contracts, no guardrails, maximum velocity — OpenAI is the bet. If regulation wins — mandatory standards, transparency requirements — Anthropic is the bet. By funding both, VCs ensure returns regardless of which regime prevails.

The Normalization

In 2024, when Jan Leike wrote "safety has taken a backseat to shiny products," it was an accusation. It carried the force of someone who believed the company could be different and was mourning the discovery that it wouldn't be. His thread was the language of betrayal — specific, personal, angry.

In 2026, the mission alignment lead accepted a title change. No public letter. No devastating thread. No mourning. The second disbanding didn't produce dissent because there was no expectation left to violate. The first disbanding was a broken promise. The second was a fulfilled prediction.

This is what normalization looks like in practice. The same action — dissolving an AI safety team — produces completely different responses depending on whether anyone still expects the company to maintain one. OpenAI has taught the market, the press, and its own employees that safety teams are temporary structures, created when public pressure demands them and dissolved when commercial priorities override them. The lesson has been absorbed so thoroughly that the second dissolution barely registered.

And the environment now reinforces the lesson from every direction. The Pentagon demands unrestricted models. Opposing PACs have raised a combined $145 million to fight over regulations that haven't been written. VCs fund both sides. The safety question has been decomposed — split among military procurement officers, campaign strategists, and asset allocators — and no single actor holds enough of it to reassemble the whole.

Somewhere in OpenAI's org chart, a man with the title "chief futurist" still thinks about alignment. His team is gone. The budget is gone. The mandate is now a title. But the title is perfect, in its way. The future is exactly what he's been given permission to think about — as long as he doesn't try to change it.