On February 3, 2026, three stories about OpenAI appeared. In a Forbes profile, Sam Altman said "we basically have built AGI, or very close to it," then dialed it back to "a spiritual statement, not a literal one." In the Financial Times, sources described an OpenAI that was prioritizing ChatGPT over long-term research, with senior staff leaving and the Sora and DALL-E teams feeling "neglected and under resourced." In Big Technology, Apptopia data showed ChatGPT's US market share had fallen from 69.1% to 45.3% in twelve months. The CEO claimed the most important technological achievement in human history. The researchers building it were leaving. The product was losing ground. All on the same day.
The Walkback
AGI — artificial general intelligence — is the stated reason OpenAI exists. The company's charter defines its mission as building AGI that benefits all of humanity. Its original nonprofit structure, its fundraising, its safety commitments, its entire organizational justification rest on the premise that AGI is a specific technical achievement worth the institutional compromises required to build it.
In one sentence, Altman claimed it. In the next, he made it unfalsifiable. "Spiritual, not literal" transforms AGI from a technical milestone into a feeling. You can't disprove a feeling. You also can't ship one.
The Contract
There is a reason Altman can't say "literal."
In June 2025, Wired reported that OpenAI had drafted an internal paper defining five levels of AI capability — and suppressed it. The reason: the paper showed OpenAI was at Level 2 of 5, and publishing it would complicate the company's ability to declare AGI achieved. This mattered because OpenAI's partnership agreement with Microsoft includes a concept of "sufficient AGI," added in 2023, which — as one observer noted — defines AGI as "a system capable of generating a certain level of profit."
The contractual implications are specific. Declaring AGI could trigger provisions that affect Microsoft's access to OpenAI's technology. So OpenAI suppressed the paper that would show how far away it was, while keeping the option to claim it when strategically useful.
Eight months later, Altman claimed it to Forbes, then immediately redefined it as "spiritual." The claim gets the headline. The walkback preserves the contract.
The Pivot
While Altman was defining AGI spiritually, the Financial Times was documenting what OpenAI was actually building. Sources described a company that had reorganized around ChatGPT at the expense of long-term research. The Sora and DALL-E teams — the ones working on the frontier capabilities closest to what AGI might require — felt neglected and under resourced. Senior staff were leaving.
This wasn't new. In October 2025, The Information reported that an influx of Meta alumni was reshaping OpenAI's culture, bringing "social media dynamics" and a softened stance on advertising. By December, staff were discussing sponsored content in ChatGPT and creating mockups with ads in sidebars. By February, OpenAI was asking advertisers to commit $200,000 or more for beta ads in ChatGPT and had signed a $200 million enterprise deal with Snowflake.
The company founded to build AGI was building an ad-supported consumer product. The researchers who signed up for the mission were leaving. And in August, Anthropic revealed that OpenAI's own engineers had been using Claude Code ahead of GPT-5's launch.
The Numbers
The market share data arrived the same day as the AGI claim. In twelve months, ChatGPT went from 69.1% of the US chatbot market to 45.3% — a decline of nearly 24 points. Google's Gemini rose from 14.7% to 25.1%. xAI's Grok — the product with the two- or three-person safety team — rose from 1.6% to 15.2%.
ChatGPT wasn't just losing share. It was losing it to a bundled Google product and to a chatbot that had generated child sexual abuse material. The AGI lab was being outpaced in the consumer market by distribution (Google) and by a lack of guardrails (xAI).
OpenAI had projected hitting $200 billion in revenue by 2030, starting from roughly $10 billion — a 20x increase in four years. The market share trend moved in the opposite direction.
What "Spiritual" Means
"A spiritual statement, not a literal one" might be the most revealing thing Altman has said about AGI. It means: AGI at OpenAI is not a technical achievement to be measured, verified, or published in a peer-reviewed paper. It's not the Level 5 in their own suppressed framework. It's not a contractual trigger. It's a claim that serves the narrative — the $500 billion valuation, the ads, the enterprise deals — without committing to anything that could be tested.
The researchers leaving OpenAI understood this before the rest of us. They signed up to build something literal. They were asked to build something spiritual.