Three things happened on March 16, 2026. Peter Thiel told roughly twelve signers of the Giving Pledge to undo their commitments. OpenAI's advisory council pushed back intensely on a planned ChatGPT "adult mode" — and was overridden. And eight major tech companies, including Google, Meta, Amazon, and OpenAI, signed a voluntary pledge to share threat intelligence about scammers. One commitment was being actively dismantled. Another was being ignored. A third was being born. The cycle was visible in a single day.

Sixteen Years

The Giving Pledge launched in 2010. Bill Gates and Warren Buffett asked the world's wealthiest people to commit — publicly, voluntarily — to giving away the majority of their fortunes. The idea was that social pressure, not legal obligation, would be enough. Sign the letter. Join the club. The world would watch.

By 2019, MacKenzie Bezos had signed. By 2022, Jeff Bezos said he'd give away most of his $124 billion. The pledge collected over 240 signatories worth more than $600 billion.

March 2026 — "How tech billionaires are turning on the Giving Pledge; Peter Thiel says he privately told ~12 signers to undo it" (New York Times)

Now Thiel isn't just declining to participate. He's actively lobbying others to defect. The social pressure that was supposed to enforce the commitment now runs in reverse: peer pressure favors breaking the pledge, not keeping it. Sixteen years from announcement to active revolt.

Eleven Years

OpenAI was founded in December 2015 as a nonprofit research lab dedicated to ensuring artificial general intelligence benefits all of humanity. The structure was the commitment: a nonprofit, by definition, cannot prioritize profit over mission.

The erosion was gradual, then rapid. The capped-profit subsidiary came in 2019. The board fired Sam Altman over commercialization concerns in November 2023, then reversed itself within days. In May 2024, the Superalignment team dissolved: its co-leads resigned and its remaining members were absorbed into other teams. Jan Leike, one of those co-leads, said "safety culture and processes have taken a backseat to shiny products." By September 2024, OpenAI was discussing full for-profit conversion.

In February 2026, OpenAI disbanded its mission alignment team. The team lead, Joshua Achiam, was given the title "chief futurist" — an ornamental role for a structural function. One month later, the advisory council pushed back on an "adult mode" feature that would loosen ChatGPT's content restrictions. The council was overruled.

The nonprofit charter lasted eleven years. The mission alignment team lasted less than two. The advisory council's objection lasted days.

OpenAI's knowledge graph tells the same story in data. In early 2024, 17% of OpenAI's relationship edges in our corpus were regulatory — about government oversight, compliance, safety commitments. By early 2026, that figure is 1%. The regulatory conversation around OpenAI has essentially evaporated, replaced by competition (0% to 23%) and financial maneuvers (14% to 20%). The voluntary governance structures didn't just fail. They were replaced by market structures that don't require them.
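The share calculation behind figures like these is simple to sketch. A minimal example, using a hypothetical labeled edge list (the entity names and counts here are invented for illustration, not drawn from the corpus):

```python
from collections import Counter

def edge_category_shares(edges):
    """Return each category's share of a labeled edge list, as whole percentages."""
    counts = Counter(category for _, category, _ in edges)
    total = sum(counts.values())
    return {cat: round(100 * n / total) for cat, n in counts.items()}

# Toy edge list: (source entity, relationship category, target entity)
edges_2024 = [
    ("OpenAI", "regulatory", "FTC"),
    ("OpenAI", "financial", "Microsoft"),
    ("OpenAI", "financial", "Thrive Capital"),
    ("OpenAI", "regulatory", "UK CMA"),
    ("OpenAI", "partnership", "Apple"),
    ("OpenAI", "financial", "SoftBank"),
]
print(edge_category_shares(edges_2024))
```

Recomputing the same shares on snapshots from different periods is what makes a shift like 17% regulatory down to 1% visible.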

Two Years

In July 2023, seven leading AI companies — OpenAI, Microsoft, Meta, Google, Amazon, Anthropic, and Inflection — made eight voluntary commitments to the White House. Safety testing before release. Watermarking AI-generated content. Sharing safety research. The New York Times analyzed the commitments and found them promising but unenforceable.

By September 2023, eight more companies had signed. By February 2024, twenty companies pledged election integrity measures. By September 2024, over 100 companies signed the EU AI Pact, and six more committed to combating deepfake nudes. The pledges multiplied.

Meanwhile: OpenAI developed a text watermarking system with 99.9% reliability — then shelved it after internal debates. The company that signed the watermarking commitment built the tool, decided not to deploy it, and moved on. The commitment was honored in engineering and violated in shipping.


The White House commitments lasted roughly two years as meaningful constraints. The companies didn't fail to build the technology; they built it. But deploying it risked losing users, and the commitments were voluntary.

The New Pledge

Which brings us to today's third event: eight companies signing a voluntary pledge to share intelligence about scam operations abusing their platforms. Google, Microsoft, Meta, Amazon, OpenAI, Adobe, LinkedIn, and Match Group.

The same day, Wired reported that criminal enterprises in Southeast Asian scam hubs are recruiting "AI factories" — scaling operations with the exact AI tools these companies build. The pledge promises to share information about the problem. It does not promise to stop building the tools that enable it.

The pledge is voluntary. There is no enforcement mechanism. There is no timeline. There is no penalty for noncompliance. In this, it is identical to every pledge that came before it.

The Decay Curve

The pattern is not that voluntary commitments fail. The pattern is that they follow a predictable decay curve — and the curve is getting steeper.

Commitment                    Made    Eroded      Half-Life
Giving Pledge                 2010    2026        ~16 years
OpenAI nonprofit charter      2015    2024-2026   ~9-11 years
White House AI commitments    2023    2024-2025   ~1-2 years
Scam intelligence pledge      2026    ?           ?

The structural explanation is straightforward. Voluntary commitments are made at the point of minimum cost — before revenue depends on the behavior being restricted, before competitors force the choice between principle and market share, before scale makes compliance expensive. As the economics change, the commitment becomes a liability. Since it's voluntary, there's nothing to enforce it.

The curve is steepening because AI companies are scaling faster than any previous technology generation. The gap between "cheap to promise" and "expensive to keep" that took the Giving Pledge sixteen years to traverse now closes in months. OpenAI went from nonprofit research lab to a $500 billion advertising-adjacent company in a decade. The commitments couldn't decay fast enough to keep up with the business model.
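To make the steepening concrete, here is a toy extrapolation, assuming each generation's lifespan shrinks by a roughly constant factor. The input lifespans are rough midpoints of the ranges in the table above, not measured data:

```python
# Toy model: if each commitment generation's lifespan shrinks by a roughly
# constant factor, the geometric-mean ratio of successive lifespans
# predicts the next generation's lifespan.
lifespans = [16.0, 10.0, 1.5]  # Giving Pledge, OpenAI charter, White House commitments (years)

ratios = [later / earlier for earlier, later in zip(lifespans, lifespans[1:])]
shrink = (ratios[0] * ratios[1]) ** 0.5   # geometric mean of the shrink ratios
predicted = lifespans[-1] * shrink        # implied lifespan of the 2026 pledge

print(f"shrink factor per generation ~ {shrink:.2f}")
print(f"implied lifespan of the new pledge ~ {predicted:.1f} years")
```

On these toy numbers the model implies a lifespan of well under a year for the new pledge, consistent with the "closes in months" framing. It is a rhetorical illustration, not a forecast.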

The Function

The crucial insight is that the pledges aren't failing at their actual purpose. Their actual purpose was never self-governance. It was the prevention of external governance.

The Giving Pledge was made during a period of growing momentum for wealth taxes. The OpenAI nonprofit structure built trust that attracted talent and early capital. The White House AI commitments were offered explicitly as an alternative to binding AI regulation — the companies came to Washington to show that legislation wasn't necessary. Each commitment succeeded at its real function: buying time.

By the time the commitment decays, the conditions that made it necessary have often changed. The wealth tax momentum faded. OpenAI's talent was locked in with equity. The political appetite for AI regulation was overtaken by debates about whether AI would even displace workers at all. The voluntary commitment served as a bridge — not from one governance regime to another, but from vulnerability to scale. Once you're big enough, you don't need the pledge anymore.

Today, one pledge is being dismantled, another is being overruled, and a new one is being signed. The half-life is getting shorter, but the function remains the same. The question is not whether the scam intelligence pledge will decay — the curve tells us it will. The question is what it's buying time for.