In May 2018, about a dozen Google employees resigned over the company's involvement in Project Maven, a Pentagon program that used AI to classify drone imagery. Google dropped the contract and published seven AI principles, pledging never to build AI for weapons or surveillance. In March 2026, Caitlin Kalinowski, OpenAI's head of hardware and robotics, resigned over concerns about domestic surveillance and autonomous weapons after the company's Department of Defense contract. OpenAI continued. The distance between those two outcomes is the entire story of military AI.

Twelve People, One Contract

Project Maven was worth roughly $9 million. Google's 2018 revenue was $136 billion. The contract represented 0.007% of the company's revenue. Twelve resignations killed it — not because Google couldn't afford to lose the people, but because the company couldn't afford the framing. AI employees were scarce. The talent market was tight. And the narrative that Google had abandoned "Don't Be Evil" for drone warfare threatened the employer brand that attracted the best researchers in the world.

Google's response was decisive. CEO Sundar Pichai laid out principles banning AI for weapons, surveillance, and technologies that cause "overall harm." The principles became a template. Other companies adopted similar language. For six years, the Maven revolt stood as proof that employee conscience could constrain military AI.

The precedent had a price tag, and it was $9 million.

The Quiet Undoing

Google began walking the principles back almost immediately. In 2019, The Intercept reported that Google was funding military and police AI through Gradient Ventures, its investment arm. The principles applied to Google. They did not apply to Google's portfolio companies. By 2021, Google was aggressively pursuing the Pentagon's Joint Warfighting Cloud Capability — the successor to the JEDI contract it had declined to bid on in 2018. The New York Times noted the work "would provide the Defense Department access to Google's cloud products" and that the department expected the technology "to support the military in combat."

The principles still existed. They just didn't apply to cloud infrastructure. Or subcontractors. Or joint ventures. The carve-outs multiplied until the principle was a perimeter around a shrinking island.

The employees kept trying. In August 2024, nearly 200 Google DeepMind workers — 5% of the division — signed a letter calling on the company to drop its military contracts. In April 2025, roughly 300 London-based DeepMind staff sought to unionize with the CWU over defense sales and Google's Israel ties. In February 2026, more than 100 employees urged Jeff Dean to block US military deals using Gemini for mass surveillance or autonomous weapons.

Two hundred. Three hundred. A hundred. Each letter larger or more organized than the last. None of them worked.

Fourteen Months

OpenAI moved faster. In January 2024, The Intercept reported that OpenAI had quietly removed a ban on "military and warfare" use from its usage policy. The company said the change was to make the policy "clearer and more readable."

In November 2024, Kalinowski joined OpenAI from Meta to lead robotics and consumer hardware. One month later, OpenAI partnered with Anduril to deploy AI for "national security missions." Six months after that, it won a $200 million DOD contract to develop tools for "warfighting and enterprise domains."

Kalinowski was building robots while the company was building for the Pentagon. Fourteen months after joining, she left.

She was not alone. On March 2, Sarah Shoker — who led OpenAI's Geopolitics Team for three years before leaving in June 2025 — wrote that frontier AI labs' military usage policies are "incoherent, vague, and often change," designed to let company leadership preserve "optionality." Two days later, Techdirt reported that OpenAI's "red lines" in its DOD agreement use terminology the NSA has spent decades redefining to permit the very things the words appear to prohibit.

The policies say "no weapons." The contracts say "warfighting." The words don't mean what they used to mean.

The Leverage Shift

In 2018, twelve people had enough leverage to kill a contract because the economics allowed it. Maven was trivial revenue. The talent market was the real asset. Companies competed for AI researchers, and those researchers could credibly threaten to leave for places that shared their values.

Three things changed.

First, the money. Anduril expects revenue of $4.3 billion this year — doubling from 2025 — with operating losses of $1.2 billion. Defense AI is no longer a $9 million side project. It is an industry. OpenAI's DOD contract alone is worth twenty-two Project Mavens.

Second, the compulsion. The Pentagon is no longer asking. It designated Anthropic a supply chain risk for refusing to remove safeguards. Today, Bloomberg profiles Emil Michael — the executive who made his name as Uber's aggressive dealmaker — as the Pentagon's point person in its dispute with Anthropic. The government imported Silicon Valley's own tactics to use against Silicon Valley. And the GSA drafted guidance requiring AI companies in civilian contracts to allow "any lawful" government use of their models. The procurement rules are being written to make refusal impossible.

Third, the talent market. AI researchers are no longer scarce in the way they were in 2018. The field has expanded. The leverage that came from being irreplaceable has diluted. When Kalinowski left, OpenAI did not publish new principles. It did not revisit the DOD contract. It posted the job.

Two Coalitions, One Direction

On February 27, the same day OpenAI closed $110 billion at a $730 billion valuation, two coalitions of workers from Amazon, Google, Microsoft, and OpenAI asked their companies to join Anthropic in refusing the Pentagon's demands. The coalitions were larger than any previous effort. They included employees from four of the five most valuable companies in the world.

Nothing changed.

Anthropic's resistance is the last version of the Maven precedent — a company saying no to the Pentagon on principle. The government's response has been the Defense Production Act, supply chain designations, and an aggressive dealmaker dispatched to force compliance. If Anthropic buckles, the precedent dies entirely. If it doesn't, the government has already demonstrated it will use legal compulsion.

Employee letters cannot override the Defense Production Act.

The Resignation

The word does double duty. Kalinowski resigned from OpenAI. The broader industry is resigning itself to something else — the understanding that employee conscience, which once stopped a Pentagon contract at the world's most powerful technology company, no longer has that power.

The employees didn't change. Google's DeepMind workers are still writing letters. OpenAI's geopolitics team lead still published her concerns. Kalinowski still walked out. The objections are as clear and principled as they were in 2018.

What changed is the structure around them. When military AI was a $9 million experiment, companies could afford to listen. When it became a $200 million contract inside a $730 billion company, backed by a government willing to invoke wartime mobilization laws, the math stopped working. Conscience became personal. It stopped being institutional.

In 2018, resignation was a threat. In 2026, it is an exit.

1 Project Maven valuation estimated from public reporting. Google's 2018 revenue: $136.8B (Alphabet annual report). Kalinowski's tenure: November 2024 (TechCrunch) to March 2026 (Fortune).