Today was earnings day for the AI infrastructure complex. Meta: revenue up 24%, stock up 7%. Microsoft: Azure up 39%, stock down 10%—its worst day since March 2020. The difference? How the market interprets $115 to $135 billion in planned spending versus $37.5 billion already spent.

The Numbers

Meta's 2026 capex guidance of $115 to $135 billion is staggering. It's nearly double the $72 billion the company spent in 2025. It exceeds analyst estimates by up to 22%. It's being driven, the company says, by investments in "Superintelligence Labs"—the AI research division Mark Zuckerberg launched in July 2025 by poaching researchers from OpenAI.

Microsoft's number is smaller but no less alarming: $37.5 billion in a single quarter, up 66% year-over-year. For context, that's more than Meta's entire capex in any year before 2024. It's more than Google's last reported quarterly capex. It exceeded analyst estimates by over $1 billion.

The market rewarded Meta and punished Microsoft. Why?

The Revenue Question

The simplest explanation is revenue growth. Meta's Q4 revenue rose 24%. Microsoft's rose 17%. More importantly, Meta's ad business—its cash machine—is clearly benefiting from AI. The company told investors that AI-driven improvements to ad targeting and content recommendations are translating directly to revenue.

Microsoft's AI narrative is murkier. Azure grew 39%, impressive by any measure. But that wasn't enough to convince investors that $150 billion in annualized capex will produce commensurate returns. The market is asking: when does spending translate to profit?
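The $150 billion figure appears to be a simple run-rate annualization of the quarterly number. A quick back-of-envelope check (assuming straight multiplication, and treating the reported percentages as exact):

```python
# Back-of-envelope check on Microsoft's capex figures cited above.
# All amounts in billions of dollars, as reported in the piece.
quarterly_capex = 37.5                  # the single quarter cited above
annualized = quarterly_capex * 4        # simple run-rate annualization
print(annualized)                       # 150.0, the "$150 billion in annualized capex"

yoy_growth = 0.66                       # capex up 66% year-over-year
prior_year_quarter = quarterly_capex / (1 + yoy_growth)
print(round(prior_year_quarter, 1))     # ~22.6, implied spend in the same quarter a year earlier
```

The implied ~$22.6 billion year-ago quarter is a derived figure, not one reported in this piece; it just illustrates how fast the run rate has moved.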

Satya Nadella's answer—that AI spending is "demand-driven"—didn't satisfy investors. Microsoft's stock fell 10% in Thursday trading, erasing roughly $300 billion in market cap: more than the entire value of most Fortune 500 companies, gone in a day.
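The scale of that one-day loss also implies the size of the base it came off. A minimal sketch of the arithmetic (assuming the 10% and $300 billion figures are both roughly exact):

```python
# Implied pre-drop market cap from the reported one-day decline.
drop_pct = 10                # percent decline in the stock
value_erased = 300           # market cap erased, in billions of dollars

# A 10% drop erasing $300B implies a starting valuation of about $3 trillion.
implied_market_cap = value_erased * 100 / drop_pct
print(implied_market_cap)    # 3000.0 billion, i.e. roughly $3 trillion
```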

The Memory Bottleneck

Meanwhile, the companies supplying this infrastructure are thriving. Samsung reported operating profit up more than 200% in Q4, driven by memory chip demand. SK Hynix surpassed Samsung in annual profit for the first time—$33 billion versus $30.5 billion—on the strength of high-bandwidth memory (HBM) sales.

Executives from both companies warned that memory shortages will persist through 2027. The AI data center boom has created demand that the entire semiconductor industry cannot satisfy. TrendForce estimates data centers will consume over 70% of all high-end memory production.

This is the supply-side constraint that makes infrastructure spending a competitive necessity. Microsoft, Meta, and Google aren't racing to build data centers because they want to. They're racing because whoever secures capacity first wins the training runs that produce the next generation of models.

The DeepSeek Shadow

All of this spending happens under the shadow of a question that emerged a year ago and hasn't gone away: what if you don't need all this infrastructure?

DeepSeek's R1 model, released in January 2025, demonstrated that efficient training techniques could produce competitive models at a fraction of the cost. A year-later analysis found that US tech companies still lead, but the spending trajectory hasn't changed. Demis Hassabis of Google DeepMind recently called the industry's response to DeepSeek a "massive overreaction."

Today, DeepSeek announced it's expanding into search and agents—broadening its AI offerings while spending orders of magnitude less than its American competitors. And Representative John Moolenaar, chair of the House Select Committee on China, accused Nvidia of helping DeepSeek develop models later used by the Chinese military.

The accusations highlight the paradox. American companies are spending $100+ billion annually on AI infrastructure. Chinese competitors are producing competitive models with smuggled chips and efficient algorithms. The infrastructure arms race may be both necessary and futile.

The Power Problem

The physical constraints are becoming acute. Power prices in Virginia—home to the world's largest data center hub—are surging to record levels. Internal Microsoft documents project the company's annual water consumption will reach 28 billion liters. The EPA ruled that xAI acted illegally by using dozens of methane turbines to power its Memphis data center.

Grid operators are responding by requiring data centers to bring their own power generation. BlackRock raised $12.5 billion specifically to fund data center energy infrastructure. AWS signed a deal with Rio Tinto's Arizona copper mine to secure raw materials.

The AI boom isn't just a semiconductor story or a software story. It's an energy story, a materials story, a real-estate story. Every additional data center puts pressure on physical infrastructure that takes years to build.

The Stranded Asset Question

The market's divergent reactions to Meta and Microsoft reflect an uncomfortable uncertainty: no one knows if this spending is wise.

If AI capabilities scale with compute—the "scaling hypothesis" that has driven industry strategy since GPT-3—then every dollar spent on infrastructure is an investment in future capability. The companies that build the most data centers will train the best models and capture the most value.

If efficient algorithms can substitute for brute-force compute—the DeepSeek hypothesis—then this spending may create the largest stranded assets in technology history. Hundreds of billions of dollars in data centers, sitting underutilized as software improvements make them unnecessary.

Meta's stock rose because investors believe its spending connects to revenue. Microsoft's fell because the connection is less clear. But both companies are making the same bet: that AI requires infrastructure, and infrastructure requires spending, now, before competitors lock up the available supply.

What We're Watching

The next few quarters will test whether infrastructure spending translates to competitive advantage. OpenAI's Stargate project—the $100+ billion joint venture with SoftBank and Oracle—is still ramping up. Google's capex is climbing. Amazon's infrastructure investments continue to grow.

At the same time, efficient models keep emerging. Arcee's Trinity Large launched today—a 400-billion-parameter open-weight model that the company claims competes with Meta's Llama 4 on some benchmarks. Each new efficient model raises the question of whether scale is strategy or just expense.

Today's earnings showed the market struggling to answer that question. Meta got the benefit of the doubt. Microsoft didn't. Both are spending at rates that would have seemed impossible three years ago. And somewhere in Hangzhou, DeepSeek is building a search engine with a fraction of the resources.

The $150 billion question isn't whether AI matters. It's whether all this spending is the path to AI dominance—or the most expensive wrong turn in tech history.