Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until Friday evening to provide the US military with unfettered access to Claude. The alternative: the Department of Defense would invoke the Defense Production Act — a wartime mobilization law last used to compel the production of ventilators — or designate Anthropic a "supply chain risk," effectively blacklisting the company from the defense ecosystem. The word that matters is "unfettered." It is the opposite of everything Anthropic was built to do.

Eleven Days

The ultimatum did not arrive without warning.

Four escalations in eleven days. Warning, threat, summons, ultimatum. Each one ratcheted the pressure from administrative to existential. And each time, the demand was the same: remove the limits.

The Others Said Yes

To understand why Anthropic is being singled out, look at who wasn't.

In June 2025, OpenAI signed a $200 million contract with the Department of Defense. In December, Google built GenAI.mil, a dedicated platform for the US military. The same month, xAI partnered with the Pentagon to embed its frontier AI systems in defense operations. Three companies. Three yeses.

Anthropic said no — not to all military use, but to unfettered use. And it specifically refused to agree to the "all lawful use" standard the Pentagon wanted, which would permit Claude for any application that isn't explicitly illegal. The distinction Anthropic drew is the one its founders left OpenAI to draw: some uses are legal but still harmful, and a responsible AI company should refuse them.

In February 2026, Axios reported, citing sources: the DOD told Anthropic it will invoke the Defense Production Act or label Anthropic a "supply chain risk" if not given unfettered Claude access by Friday.

The government's response to the only lab that said no was the Defense Production Act.

2018

In May 2018, about a dozen Google employees resigned in protest over Project Maven, a Pentagon contract for AI-powered drone image classification. Google eventually withdrew from the project. The employees made a choice, the company made a choice, and the government accepted both.

Eight years later, the choice is gone. The Defense Production Act does not accept no for an answer. It was designed for wartime: steel production in World War II, industrial mobilization during Korea, ventilators and vaccines during COVID. In January 2024, the Biden administration considered using it to require companies to report when they trained AI models that could pose national security risks. Reporting. Two years later, the Trump administration is using it to compel access.

The Paradox

Anthropic was founded in 2021 by former OpenAI researchers who believed AI development required stronger safety constraints. Its constitutional AI approach, its interpretability research, its responsible scaling policy — all of it was designed to ensure that powerful AI systems would have limits.

Those limits are precisely why the Pentagon wants Claude. The safety research that makes Claude predictable, interpretable, and controllable is what makes it trustworthy enough for defense applications. The military doesn't want an unpredictable system. It wants one that has been rigorously constrained — and then it wants those constraints removed for its own use.

The government is demanding the product while rejecting the process that produces it.

On the same day as the ultimatum, Anthropic updated its Responsible Scaling Policy, separating the threshold for autonomous AI capabilities from the threshold for military applications. It was a concession — creating a dedicated military pathway with different rules. But it was not the concession the Pentagon wanted. The Pentagon wanted "unfettered." Anthropic offered a door. The Pentagon wanted the wall removed.

The Market

Also on February 24, the day of the ultimatum, software stocks rebounded after a weekslong selloff. The catalyst, according to CNBC: Anthropic announced enterprise partnerships integrating Claude into workflows at Slack, Intuit, DocuSign, and FactSet. The market treated the same day's news — government compulsion and enterprise expansion — as a single buy signal.

IBM was still down 13.15% since Anthropic's COBOL blog post. Software stocks were rising on Anthropic's partnership announcements. And Anthropic was valued at roughly $350 billion on a $5-6 billion employee share sale. One company was simultaneously crashing a legacy giant, lifting the software sector, and being threatened with wartime powers by the US government.

Across the Atlantic, European military officials worried that "tech sovereignty" ideas could undermine NATO. The worry is structural: Europe has no Anthropic to compel. It has no OpenAI to contract. The AI that will define the next era of defense is being built in San Francisco and requisitioned in Washington. Europe is watching from a distance that grows wider every week.

Unfettered

The New York Times published a piece the same day about how Silicon Valley has long ignored China's looming threat to Taiwan — the island whose semiconductor fabs produce the chips that power every AI model, including Claude. Apple announced it was moving some Mac mini production to Houston. The physical supply chain is starting to move. The AI supply chain is being requisitioned in place.

There is no precedent for what happened on February 24. A $350 billion company — one that exists because its founders believed AI needed limits — was told by the United States government to remove those limits by Friday or face wartime industrial compulsion. The other major AI labs had already complied. The market treated the news as bullish. Europe had nothing equivalent. And the chips that power all of it sit on an island a hundred miles off the coast of a country that wants to take it.

Anthropic has said it has no intention of easing Claude's restrictions on military use. The Defense Production Act does not require intention. It requires compliance.