A DOD filing revealed on March 18 that the government designated Anthropic a supply chain risk because the company "could disable its tech" if the Pentagon crossed its "red lines." The same day, Axios published Ramp spending data from companies buying AI tools for the first time: Anthropic is capturing 73% of that spending, up from a 50/50 split with OpenAI in January. Two documents. Opposite verdicts. Same capability under review: the right to refuse.

The Language

The filing is the government's response to Anthropic's lawsuit challenging the designation, and the language is worth reading precisely. The DOD did not say Anthropic failed to deliver a product. It did not say Claude performed poorly. It did not allege a contract breach. The Department of Defense said Anthropic could — not did, could — disable access to its technology if the military crossed ethical limits the company had set for itself.

"The US DOD said it designated Anthropic a supply chain risk over concerns the AI company could disable its tech if the Pentagon crossed its 'red lines'" (Wired, March 2026).

In procurement language, "supply chain risk" typically means a vendor might fail to deliver due to financial instability, foreign ownership, or production constraints. The government has now added a new category: moral autonomy. The risk is not that Anthropic can't deliver. The risk is that it might choose not to.

This is the clearest articulation of the logic that drove seven weeks of escalation. The DPA threats, the blacklisting, the public posturing from Pentagon AI chief Emil Michael — all of it traced back to one concern. Not what Anthropic did, but what it reserved the right to do.

The Number

[Chart: Anthropic vs. OpenAI share of new enterprise AI spending, roughly 50/50 in January versus 73% to Anthropic in March, seven weeks later]

The entire shift occurred during the period the government was trying to punish the company. These are not existing customers deepening their commitments. They are first-time buyers: companies entering the AI market and deciding who to trust with their data, their workflows, their compliance risk. They chose, by a margin of nearly 3-to-1, the company the government had just blacklisted.

The broader metrics confirm the acceleration. In the weeks since the dispute began: Claude's daily signups quadrupled. Anthropic's run-rate revenue hit $19 billion, up from roughly $9 billion a year earlier. The company's cumulative revenue since 2023 crossed $5 billion. None of this happened despite the Pentagon conflict. The chronology suggests it happened, at least in part, because of it.

The Stress Test

Enterprise buyers don't choose AI vendors on ideology. They choose on risk. And the Pentagon dispute inadvertently proved something that no sales pitch or compliance certification could: Anthropic's safety commitments are load-bearing.

Before February, the "Responsible Scaling Policy" was a document. Anthropic said it would refuse certain uses. But every AI company says things like that. OpenAI had responsible use policies when it quietly removed its military ban in January 2024. Google had AI Principles when it dropped weapons exclusions in February 2025. Policies are written on paper. They are easy to adopt and easier to abandon when the pressure arrives.

The Pentagon was the pressure. The government deployed every tool of coercion available — the Defense Production Act, supply chain risk designation, defense contractor blacklisting, Emil Michael's public pressure campaign — and Anthropic sued rather than comply. Dario Amodei wrote that his company could not "in good conscience" accede to the Pentagon's demands. Then he said Anthropic would fight the designation in court. Then the company filed the lawsuit.

For a CIO evaluating AI vendors in March 2026, that seven-week stress test is worth more than any audit or certification. Anthropic was tested by the most powerful institution on Earth and did not budge. The 73% is what that proof is worth.

The Ranks

The other revelation from March 18: the New York Times reported that several tech companies, including OpenAI, are privately encouraging the DOD to back away from the designation.

This is not altruism. Sam Altman took public swipes at Anthropic just twelve days earlier. OpenAI signed a deal with AWS the day before to sell AI services to the same government agencies Anthropic can no longer reach. The companies competing hardest against Anthropic are simultaneously lobbying to protect its right to have ethical limits.

They have to. If the government can designate "the capacity to refuse" as a supply chain risk, every company's red lines become a liability. OpenAI's own limits — narrower than Anthropic's, but they exist — are next. Google's are. Amazon's are. The designation isn't a precedent anyone in the industry wants on the books, because none of them want to live in the world where it stands.

Microsoft filed an amicus brief supporting Anthropic's lawsuit. Google and Amazon publicly committed to keeping Anthropic's tools in their products. Google DeepMind Chief Scientist Jeff Dean and thirty-plus employees from OpenAI, Google, and Anthropic signed an open letter. The industry that agrees on almost nothing agreed on this.

Every Direction

Seven weeks in, the Pentagon's action has produced the opposite of every intended effect.

The goal was to coerce Anthropic into providing unfettered access. Anthropic sued. The goal was to demonstrate the cost of defiance. The market rewarded it — 73 cents of every new dollar. The goal was to isolate Anthropic from the defense ecosystem. Microsoft, Google, and Amazon drew closer. The goal was to show the AI industry what happens when a company says no. The industry closed ranks behind the one that did. And when OpenAI signed an AWS deal to capture the government business Anthropic lost, Microsoft began weighing legal action against both Amazon and its own $13 billion investment — collateral damage the Pentagon never intended.

The White House is now preparing an executive order on AI company designations. The administration appears to recognize this has gone somewhere it didn't intend.

The Term

"Supply chain risk" will be how historians mark the moment the US government decided that an AI company's ethical autonomy constituted a national security threat. The filing is remarkable not because it's wrong on the facts — the DOD is correct that Anthropic might refuse an order — but because it states the logic without euphemism. A company that might enforce its own principles is, by definition, an unreliable supplier.

The market heard the same argument and drew the opposite conclusion. Seventy-three percent of new enterprise AI spending is flowing to the company that reserves the right to refuse. Those buyers are not choosing it despite the red lines. They are buying the red lines.

The investors asked whether safety was an asset or a liability. The competitors asked whether they shared the same limits. The government asked whether those limits could be removed by force. March 18 delivered the answers. The government created the test. The market delivered the verdict.