On March 4, 2026, The Information published a leaked Friday memo in which Dario Amodei called OpenAI's DOD deal "safety theater." The same day, Reuters reported that some of Anthropic's own investors were pushing the company to de-escalate its standoff with the Pentagon. Sam Altman defended the deal at an OpenAI all-hands, telling staff the backlash was overblown — then an OpenAI spokesperson said Altman had "misspoken" about the company seeking additional DOD work. One CEO escalating. Another CEO being corrected by his own PR team. And Anthropic's backers, caught between the two, choosing a side they hadn't expected to choose.

Sixty Billion

Anthropic has raised more than $60 billion from over 200 investors. The most recent round — a $30 billion Series G led by GIC and Coatue in February 2026 — was announced three weeks before the investor pressure became public. Before that, a $13 billion Series F in September 2025. Before that, a $3.5 billion Series E. Each round larger than the last, each at a higher valuation, each predicated on the same thesis: safety is Anthropic's competitive advantage.

The thesis has worked. Anthropic's revenue run rate hit $14 billion by February 2026 — growing more than 10x year over year. The a16z enterprise survey showed Anthropic gaining CIO adoption faster than any other AI vendor. Microsoft became one of Anthropic's top clients. Apple's internal development "runs on Anthropic at this point."

The investors who put in $60 billion were buying a specific bet: that the AI company built on safety would win enterprise because enterprises want safety. Five weeks ago, the numbers validated that bet. The $14 billion run rate was the proof.

The Threat

Then the Defense Production Act changed the math.

The DPA isn't a revenue question. Losing Pentagon contracts would cost Anthropic some business, but the company is growing fast enough without military revenue to sustain its trajectory. The threat is structural. If the government invokes the DPA, it can compel Anthropic to provide Claude access without the safety restrictions the company insists on. If it designates Anthropic a supply chain risk, defense contractors and adjacent industries may stop working with the company entirely.

For investors, this isn't about the Pentagon's AI budget. It's about what happens to $60 billion in equity if the government turns Anthropic into a pariah — or seizes operational control of its model.

Reuters, March 2026: some investors push Anthropic to de-escalate its DOD dispute and find a compromise on military AI access.

Axios reported that Anthropic's $60 billion-plus in funding came from more than 200 investors, with half entering through the most recent rounds. Many of these investors wrote checks because safety was the thesis. Now some are pushing the company to compromise on safety because the alternative — the DPA — could destroy the thesis along with their investment.

The Escalation

Amodei's response was not to de-escalate. It was to name the alternative for what he believes it is.

The Information, March 2026: in a leaked Friday memo, Dario Amodei called OpenAI's DOD deal "safety theater," saying OpenAI adopted Anthropic's language without its substance.

The phrase "safety theater" does specific work. It says: OpenAI's red lines — which Techdirt noted "effectively adopt the words" Anthropic uses — are performance, not policy. OpenAI amended its DOD contract to include provisions against mass surveillance and autonomous weapons targeting. Amodei's argument is that adopting the language without the enforcement architecture is worse than having no language at all, because it creates the illusion of constraint.

This is a CEO who is being told by his investors to back down, and whose response is to publicly attack the competitor that did what his investors wish he would do.

The Pattern

Trace the Anthropic DOD story as a progression of pressure sources: first the Pentagon, then the White House, then the defense establishment, now the investors. In five weeks the pressure moved from the government to the people who funded the company because of its principles, and who are now asking it to soften those principles because of what the government might do.

The Contradiction

The investor pressure contains a specific contradiction. The $60 billion bet was: safety produces commercial value. Enterprise customers choose Anthropic because it's the company that thinks about risks. The a16z data showed 75% of Anthropic customers run the latest models — versus 46% for OpenAI — because they trust the upgrades are safe. The $14 billion revenue run rate was built on that trust.

Investors pushing Anthropic to compromise on safety to avoid the DPA are asking the company to weaken the thing that produces the revenue that justifies their investment. The logic is: save the company from the government even if it means damaging the reason the company is worth saving.

The investors bought safety as a thesis. Now some want to sell it as a concession.

Amodei's response — escalation, not compromise — suggests he understands something his investors may not. If Anthropic compromises on military AI restrictions and the DPA threat recedes, the company survives but the thesis doesn't. Enterprise customers chose Anthropic because it was the company that held the line. A company that held the line until the line became expensive is a different value proposition entirely.

The Second Front

Meanwhile, the OpenAI all-hands revealed its own internal strain. Altman defended the DOD deal, calling the backlash overblown and telling staff the company doesn't "get to make our own foreign policy." Then he said something about pursuing additional DOD work. Then a spokesperson said he misspoke. The CEO of a $730 billion company, corrected by his own communications team on the day's most sensitive topic.

The correction matters because of what Amodei's leaked memo accused OpenAI of: theater. If OpenAI's position on military AI requires a spokesperson to clean up after the CEO, the position is less stable than the contract language suggests.

On March 4, both companies were fighting on two fronts. Anthropic fights the government and its own investors. OpenAI fights its critics and its own CEO's statements. The difference is that Anthropic's two-front war is about whether to hold a position. OpenAI's is about what the position actually is.