On the morning of March 1, 2026, the Trump administration formally designated Anthropic a "supply chain risk" — the legal precursor to invoking the Defense Production Act. By evening, the Wall Street Journal reported that the Pentagon had used Claude, Anthropic's AI, in its major air attack on Iran. The tool the government spent a month threatening to seize was already in the cockpit.

Thirty Days

The timeline is compressed enough to trace day by day. On January 30, Reuters reported the first clash: the Pentagon wanted Anthropic to eliminate safeguards limiting Claude's use in combat scenarios. Anthropic refused. On February 16, an administration official told Axios the Pentagon might sever the relationship entirely. On February 24, Defense Secretary Hegseth summoned Dario Amodei to the Pentagon. On February 25, Anthropic confirmed it had no intention of easing restrictions. On February 26, the DOD asked Boeing and Lockheed Martin to assess whether they could replace Anthropic's tools in the defense supply chain.

On February 28, the Washington Post reported that the formal supply chain risk designation was coming. On March 1, Anthropic said it would challenge the designation in court. That same day, Claude was used in the Iran strike.

March 1, 2026 (Wall Street Journal): the Pentagon used Claude in its major air attack on Iran, hours after Trump declared the government would end its use of Anthropic's tools.

The government did not pause its use of the tool while threatening to seize it. It did not wait for the designation to take effect. It did not use a different model. On the same day it told Anthropic it was a national security risk for refusing to remove guardrails, it deployed that same guardrailed model in a combat operation.

What the Pentagon Wanted

The Atlantic reported what the talks looked like from inside. To the end, the Pentagon's demand was the same: unfettered access. Claude should reason through combat scenarios — including hypothetical nuclear attacks — without guardrails. Anthropic's position was also unchanged: a model reasoning through nuclear first-strike scenarios without constraints is not a safety feature, it is an existential risk. Researchers had already published what happens when those guardrails are removed: nuclear weapons deployed in 95% of simulations.

The New York Times detailed the full collapse of negotiations. The CIA, which had been using Claude for analysis, was hoping for a "peace agreement." It didn't come. Amodei went on CBS to say "we are patriotic Americans" — a sentence no AI CEO should need to say — and that Anthropic feared the administration's goal was to "set an example."

The Iran strike confirmed the fear. The example wasn't about Claude's capabilities. Claude worked. The Pentagon used it and it performed. The example was about something else: what happens to a company that tells the government no.

May 2019

To understand what the Iran strike reveals, you have to go back seven years. In May 2019, Israel responded to an alleged Hamas cyberattack with an airstrike — the first time a country reacted to hackers with immediate kinetic force. It was a threshold moment: the line between digital warfare and physical warfare dissolved. But no AI was making the decisions. Human operators identified the target, human commanders authorized the strike, and the building had already been evacuated.

In 2024, reports emerged about Israel's Lavender targeting system — AI that identified targets in Gaza. The controversy was about what happened when the AI was wrong: civilian deaths attributed to algorithmic errors in targeting data. The system was a tool in a loop with human decision-makers, but the speed of the loop compressed the space for human judgment.

The Iran strike is a different kind of threshold. Claude was not a targeting system. It was used for operational analysis — processing intelligence, assessing scenarios, supporting planning. But the use of a commercial AI model in a live combat operation, by the same government that was in the process of designating its maker a national security risk, establishes something that no policy paper or defense contract ever stated: the government considers these models weapons-adjacent, whether the companies that build them want that classification or not.

The Response

On the same day as the Iran strike, two things happened that the combat footage won't show. Claude became the #1 free app in the US App Store — a sympathy surge from users who saw the DPA standoff as bullying. And Sam Altman, in an AMA, said the Pentagon blacklisting Anthropic sets an "extremely dangerous precedent."

Altman's warning was self-interested but accurate. OpenAI published a statement saying its DOD agreement "has more guardrails" than what was demanded of Anthropic, and that it does not support the supply chain risk designation. The company that removed its military ban in January 2024 was now defending the company that kept its guardrails — because the precedent affects everyone.

If the government can invoke the Defense Production Act against an AI company for having safety restrictions, then every company's safety policy is conditional on government approval. The commercial incentive is to remove guardrails preemptively, before you're forced to. The 95% nuclear simulation result becomes not a warning but an inevitability — because nobody will be allowed to prevent it.

The government used the weapon while punishing the company for restricting it. The contradiction is the policy.

The Contradiction

Anthropic's position is that some uses of AI are too dangerous to allow regardless of who asks. The Pentagon's position is that national security overrides commercial safety policies. Both positions are internally consistent. What the Iran strike reveals is that the government holds both at once: it considers Claude safe enough to use in a bombing campaign and too restricted to remain in the supply chain.

The Stratechery analysis published the next day tried to thread the needle: Anthropic's concerns are "legitimate" but its position is "intolerable." The word intolerable does the work. Not wrong. Intolerable. The government does not dispute that removing guardrails creates risk. It disputes that a private company gets to decide what risk is acceptable when the government is at war.

The Iran strike settled the question that thirty days of negotiations could not. Claude is already a weapon. Whether Anthropic consents to that classification is, in the government's view, irrelevant. The company built the tool. The government used it. And the DPA threat ensures that the next company will think twice before saying no — which was, as Amodei feared, the point.