February 1, 2026 produced two stories about Anthropic that seem to contradict each other. The Atlantic published a profile describing a company "caught between the pressures to be safe, fast, and rigorous while being commercially successful"—a company "at war with itself." Hours later, a16z released its annual enterprise AI survey showing Anthropic had achieved a 25% increase in CIO adoption since May 2025, the fastest growth of any AI vendor. One story describes conflict. The other describes victory. Both are accurate. Understanding why reveals something important about where the AI industry is heading.
The Survey
The a16z data is worth examining in detail. Among 100 Global 2000 companies:
- OpenAI: 78% of CIOs use its models in production. But wallet share dropped to 56%. And only 46% of OpenAI customers run the latest models—the rest stick with older versions that "work well enough."
- Anthropic: 44% adoption in production, rising to 63% including testing. A 25% increase since May 2025. And 75% of Anthropic customers run Sonnet 4.5 or Opus 4.5—the latest models.
OpenAI has breadth. Anthropic has depth. OpenAI customers aren't upgrading. Anthropic customers are. OpenAI dominates horizontal use cases—chatbots, knowledge management, customer support. Anthropic leads in software development and data analysis.
The numbers describe two different strategies for winning enterprise AI.
The Revenue
Anthropic's financial trajectory tells the same story the survey does. In July 2025, The Information reported Anthropic's revenue had hit a $4 billion annual pace—up nearly 4x from the prior year. By January 2026, sources said Anthropic had raised its internal forecasts to $18 billion for 2026 and $55 billion for 2027.
Going from a $4 billion run rate to an $18 billion forecast in roughly a year is not the trajectory of a company at war with itself. It's the trajectory of a company executing.
The Wall Street Journal reported in October that corporate AI makes up roughly 80% of Anthropic's business. Microsoft has become one of Anthropic's top clients. Accenture signed a three-year deal to sell Anthropic's AI services to businesses. Apple's internal development "runs on Anthropic at this point."
This is not a company struggling to reconcile safety with commerce. This is a company that figured out how to make safety commercially valuable.
The Coverage Gap
Our rivalry analysis shows the shift in real time:
| Period | OpenAI | Anthropic | Ratio |
|---|---|---|---|
| 2024 Q2 | 168 | 26 | 6.5x |
| 2024 Q4 | 149 | 41 | 3.6x |
| 2025 Q2 | 168 | 51 | 3.3x |
| 2025 Q4 | 236 | 65 | 3.6x |
| 2026 Q1 | 96 | 54 | 1.8x |
OpenAI still leads in coverage volume. But the gap has collapsed from 6.5x to 1.8x in less than two years. Anthropic is no longer a footnote in the AI story. It's becoming a co-author.
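The ratio column is simple arithmetic on the two count columns. As a sanity check, it can be recomputed in a few lines (counts are taken directly from the table above; one-decimal rounding is assumed to match the published ratios):

```python
# Recompute the coverage ratio (OpenAI count / Anthropic count) per quarter.
# Counts come from the rivalry-analysis table; rounding to one decimal
# is assumed to match the table's ratio column.
counts = {
    "2024 Q2": (168, 26),
    "2024 Q4": (149, 41),
    "2025 Q2": (168, 51),
    "2025 Q4": (236, 65),
    "2026 Q1": (96, 54),
}

for period, (openai, anthropic) in counts.items():
    ratio = round(openai / anthropic, 1)
    print(f"{period}: {ratio}x")
```

Every quarter reproduces the table's ratio, including the collapse from 6.5x to 1.8x.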
The Tensions Are Real
The Atlantic isn't wrong that Anthropic faces tensions. They're just not the tensions the article emphasizes.
The Pentagon: Reuters reported the Defense Department is clashing with Anthropic over safeguards limiting AI use for autonomous weapons targeting and domestic surveillance. Anthropic has refused to remove the guardrails. This is a real conflict with real consequences—lost contracts, government friction.
The lawsuits: Music publishers filed a second lawsuit against Anthropic, alleging copyright infringement involving more than 20,000 songs and seeking $3 billion in damages. A preliminary $1.5 billion settlement of earlier copyright claims was approved in September. These are real costs.
The partnerships: Apple wanted to rebuild Siri around Claude but negotiations failed. Whatever caused the breakdown, it represents a massive missed opportunity.
Anthropic does face pressures. But the pressures aren't between being safe and being successful. They're the normal frictions of a company growing 4x per year while maintaining principles that some customers and partners find inconvenient.
The Strategy
Consider what Anthropic has actually shipped:
- Claude Code: An AI coding assistant whose ARR grew substantially through 2025
- Cowork plugins: Agentic tools that let enterprise users automate department-specific tasks
- MCP extensions: New ways for Claude to interact with external applications
- NASA navigation: Claude plotted a 400-meter path for the Perseverance rover on Mars
These are enterprise products. Developer tools. Specialized applications. They're not consumer features or entertainment partnerships.
Now consider what OpenAI has shipped recently:
- Disney partnership: Joint oversight of how Disney IP is used in ChatGPT
- "Your Year with ChatGPT": A Spotify Wrapped-style recap feature
- Tone adjustments: Letting users make ChatGPT warmer or more enthusiastic
- Ads: Testing sponsored content below ChatGPT responses
OpenAI is building a consumer platform. Anthropic is building enterprise infrastructure. These are different strategies for different markets.
Why Safety Wins Enterprise
The a16z survey finding that Anthropic leads in software development and data analysis is not coincidental. Developers and data scientists are the users who most need to trust their tools. They're writing code that will run in production. They're analyzing data that will drive decisions. Reliability and predictability matter more than novelty.
Anthropic's safety positioning—the same positioning that The Atlantic frames as a source of internal conflict—is exactly what these users want. When an AI company says "we've thought carefully about the risks," developers hear "we've thought carefully about edge cases." When a company refuses Pentagon contracts over ethical concerns, enterprise customers hear "this company won't cut corners on our deployment either."
The 75% figure—three-quarters of Anthropic customers running the latest models—reflects this trust. Developers upgrade to new Claude versions because they believe the improvements are real and the risks are managed. Only 46% of OpenAI customers do the same.
Safety isn't a constraint on Anthropic's commercial success. It's the source of it.
The Gap Between Narrative and Numbers
The Atlantic article opens with an Anthropic employee saying "things are moving uncomfortably fast." The framing suggests crisis—a company overwhelmed by the pace of events, struggling to maintain its principles amid commercial pressure.
The numbers suggest something different. Revenue forecasts of $18 billion. Enterprise adoption up 25% in eight months. Customers upgrading to the latest models at 75%, against OpenAI's 46%. A coverage gap collapsing from 6.5x to 1.8x.
These are not the numbers of a company at war with itself. These are the numbers of a company that found a way to make its principles profitable.
The Atlantic's framing isn't wrong—it's incomplete. Anthropic does face tensions. The Pentagon battle is real. The copyright lawsuits are expensive. The Apple partnership failure hurt. But the narrative of internal conflict obscures a more interesting story: a company that bet safety would be a competitive advantage and is watching that bet pay off.
Two Models
February 1, 2026 crystallized something that's been developing for months. The AI industry is splitting into two business models:
The breadth model: Maximum reach, consumer focus, entertainment partnerships, advertising revenue. OpenAI's path. The Google playbook adapted for AI. 78% of CIOs use your product, but they don't upgrade because the old version works fine. You monetize through volume and ads.
The depth model: Enterprise focus, developer trust, specialized applications, premium pricing. Anthropic's path. Fewer customers, but they're more engaged—they upgrade, they pay more, they build production systems on your platform. You monetize through value delivered.
The Atlantic's article assumes there's only one way to succeed in AI: be OpenAI. From that perspective, Anthropic's commitment to safety looks like a handicap, a source of internal tension that must be resolved in favor of commerce.
The a16z data suggests otherwise. There are two markets: consumers who want convenience and enterprises that want reliability. OpenAI is winning the first. Anthropic is winning the second. Both can be right.
What We're Watching
Anthropic's $20 billion raise will close soon. The IPO speculation continues. The revenue forecasts will either prove accurate or they won't.
But February 1 established something important: the narrative that Anthropic is struggling to balance safety and success is outdated. The numbers show a company that figured out how to make safety its competitive moat. The tensions are real, but they're productive tensions—the frictions of growth, not the paralysis of indecision.
OpenAI executives have privately expressed concern that Anthropic could beat them to IPO. That concern suggests they understand what The Atlantic's framing misses: Anthropic isn't at war with itself. It's at war with OpenAI. And the numbers say it's gaining ground.