On March 21, MIT Technology Review reported that OpenAI plans to launch "an autonomous AI research intern" by September 2026. The company's "North Star" is a fully automated multi-agent research system by 2028. The same day, the Financial Times reported that OpenAI plans to nearly double its headcount to 8,000 employees by December — explicitly to "stop Anthropic's momentum with business customers." One announcement is about eliminating the need for human researchers. The other is about hiring 3,500 humans. They are not contradictions. They are the same strategy.

The Two Halves

The 3,500 new hires are not researchers. The Financial Times is specific: the expansion targets enterprise sales. OpenAI is hiring humans to sell AI to businesses, to manage accounts, to staff go-to-market operations. The research — the thing that made OpenAI worth selling — will increasingly be done by the AI itself.

[Chart: planned employees by December 2026 (mostly sales and operations) vs. planned human researchers needed by 2028 (the North Star)]

This is a company splitting in two. One half is a research lab that believes its core function — scientific discovery — can and should be automated. The other half is a sales organization that knows automated research is worthless without distribution. The former is an engineering problem. The latter is a human one.

The irony: the company founded in 2015 as a nonprofit research lab to ensure AI benefits humanity is now hiring thousands to sell AI while building AI to replace its own researchers. The humans are for sales. The research is for machines.

Thirty-Three Wikipedias

Also on March 21, the New York Times reported on "tokenmaxxing" — a status game where employees at OpenAI and other companies compete on leaderboards showing how much AI they use. One OpenAI engineer processed 210 billion tokens, enough text to fill Wikipedia thirty-three times over. The game has no output metric. It measures input — how many tokens you consume — not what you produce with them.

Tokenmaxxing is what happens when a company's culture shifts from making a thing to selling a thing. In a research lab, status comes from papers, breakthroughs, discoveries. In a sales organization, status comes from usage metrics. The leaderboard replaced the paper. Consumption replaced creation. An engineer who filled thirty-three Wikipedias with AI-processed text won a game that has nothing to do with research and everything to do with performing AI adoption.

On the same day, Jensen Huang proposed a compensation model where engineers receive an AI token budget on top of their base salary, to deploy agents as productivity multipliers. Tokens as salary. Tokens as status. Tokens as the unit of value in the AI economy. The word went from a technical term that most people hadn't heard in 2023 to a proposed form of compensation in 2026.

The Ads

The fourth OpenAI story on March 21 is the most revealing. The Information reported that advertisers who bought ChatGPT's first ad campaigns found the process "low tech" and received little data showing whether their ads worked. OpenAI is preparing to open ad sales to more marketers next month. Its first customers weren't impressed.

This is the gap between the valuation and the business. OpenAI is worth $730 billion. Its first ad product can't tell advertisers if their ads worked. The company that is building an autonomous research system capable of scientific discovery cannot deliver basic ad attribution metrics. The research is world-class. The sales infrastructure is a startup.

That gap is why the 3,500 hires exist. OpenAI's research moat is real but narrowing: Anthropic is capturing 73% of new enterprise AI spending, and it is shipping Claude Code features faster than ChatGPT's enterprise team can match. The frontier labs are converging on capability. What separates them is distribution: sales teams, ad infrastructure, enterprise relationships, government contracts. And distribution requires humans.

The Sequence

Trace the sequence across three years and the pattern is consistent. Every six months, a research function leaves and a commercial function arrives. Safety team out, enterprise sales in. Alignment team out, ad platform in. Human researchers out, autonomous AI researchers in. The company didn't abandon research. It automated it.

The North Star

OpenAI's leadership described the fully automated multi-agent research system as their "North Star" — the thing the entire company is oriented toward. The phrasing is worth taking seriously. A north star is not a goal among goals. It is the organizing principle.

If the organizing principle of OpenAI is to automate research, then the 8,000-person company it is building is not a research organization. It is a distribution organization with an automated research engine. The humans handle sales, partnerships, government relations, compliance, marketing, content moderation, and the dozens of functions required to turn research into revenue. The research itself — the thing that once required Jan Leike, Ilya Sutskever, and hundreds of PhD researchers — will be done by an intern. An AI intern. By September.

Meanwhile, one of those humans filled thirty-three Wikipedias with tokens to win a leaderboard. In a research lab, that would be a misuse of resources. In a sales organization, it is exactly the metric that matters: demonstrating, visibly and measurably, that you are using the product.

The lab that feared AI would replace human researchers is building exactly that. The humans who remain will sell what the machines discover.