On February 27, Sam Altman told staff that OpenAI shares Anthropic's "red lines" on military AI — and that OpenAI is actively seeking a deal with the Department of Defense. The same day, OpenAI closed $110 billion in funding at a $730 billion valuation — the largest private round in history. And Dario Amodei published a letter saying Anthropic cannot "in good conscience" accede to the Pentagon's demands. One company holds the red lines and faces the Defense Production Act. The other claims them and is worth $730 billion.
January 13, 2024
The date matters because it is the beginning. On that day, The Intercept reported that OpenAI had quietly removed a ban on "military and warfare" use from its usage policy. The company said the rewrite was to make the policy "clearer and more readable." Four days later, a VP told Bloomberg that OpenAI was already developing tools with the Department of Defense.
In January 2024, OpenAI was valued at roughly $80 billion. The ban's removal was not announced; it was discovered by reporters reading updated terms of service. But it was the first step in a sequence that leads directly to February 27, 2026.
The Sequence
Four months after the ban was lifted, OpenAI's entire Superalignment team — the group focused on existential AI dangers — dissolved. Co-lead Jan Leike resigned, writing that "safety culture and processes have taken a backseat to shiny products." By September 2024, the valuation had reached $150 billion.
In February 2025, Google dropped language from its own AI Principles that excluded weapons and surveillance applications. The industry was following OpenAI's lead. By that month, SoftBank had invested $40 billion in OpenAI at a $260 billion valuation.
In July 2025, the Department of Defense formally brought OpenAI into its framework for AI deployment. By August, the valuation reached $300 billion. By December, $500 billion.
Then February 2026. On the 12th, OpenAI announced military access to ChatGPT through the Pentagon's GenAI.mil platform. The same day, Platformer reported that OpenAI had disbanded its mission alignment team — the second safety-focused team to disappear in under two years. Two weeks later, the valuation hit $730 billion.
The constraint removals did not cause the 27x growth. ChatGPT's traction, the AI spending boom, and SoftBank's appetite for scale did. But each removal cleared a path the previous valuation couldn't have taken. The company that banned military use at $27 billion is seeking a DOD contract at $730 billion. The CEO who presided over both is now claiming shared red lines with the company being punished for holding them.
The Gap
Altman's memo specified what OpenAI would not do: lethal autonomous weapons, mass surveillance, domestic targeting. These are real exclusions. But they are not what triggered the Defense Production Act.
Anthropic's red lines — the ones that escalated the standoff — involved unfettered military access to Claude for scenarios including hypothetical nuclear attacks. The Pentagon wanted Claude to reason through those scenarios without guardrails. Anthropic refused, and its researchers published what unfettered reasoning produces: nuclear weapons deployed in 95% of simulations.
The Pentagon says it offered compromises. Anthropic says the new contract language made "virtually no progress." Amodei's letter did not hedge: Anthropic will "work to ensure a smooth transition" if the government removes it from the defense supply chain. He is offering to lose the business rather than cross the line.
Altman excluded categories that OpenAI, a software company, was unlikely to enter regardless. Amodei refused categories the Pentagon specifically demanded. The word "shared" does not describe these two positions.
The red lines Altman claims to share are the ones that cost nothing to hold. The red lines Anthropic holds are the ones that cost everything.
Three Postures
On the same day as Altman's memo, Bloomberg reported that two coalitions of workers — including employees of Amazon, Google, Microsoft, and OpenAI itself — asked their companies to join Anthropic in refusing the Pentagon's demands. OpenAI's own staff signed a letter asking the company to do what its CEO says it already does.
Also on February 27, the Wall Street Journal reported that multiple federal agencies had raised safety concerns about xAI's Grok before the Pentagon approved it for classified settings. This is the other end of the spectrum: what happens when a company accepts the Pentagon's "all lawful use" standard without red lines of any kind.
Hold the lines: DPA threat. Claim the lines: $730 billion. Reject the lines: safety warnings in classified systems. The employees who signed the letter understand that the first and second are not the same thing.
The Number
Amazon will invest an initial $15 billion in OpenAI, with another $35 billion conditional on milestones. SoftBank's cumulative investment approaches $70 billion. The total raise — $110 billion — exceeds everything Anthropic has raised in its entire existence.
Three years ago, OpenAI was worth $27 billion and had a policy against military use. Today it is worth $730 billion and has a memo about shared red lines. The policy was a constraint. The memo is a claim. And the 27x between those two valuations is the market's answer to what red lines are worth — not as principles, but as positioning.