The Forecast

March 7, 2026

Oracle and OpenAI just killed their Stargate expansion in Texas. The plan was to grow the Abilene campus from 1.2 gigawatts to 2 gigawatts. Instead, they walked away. The reasons given: financing delays and OpenAI's "inability to forecast demand effectively."

Read that again. The company at the center of the AI revolution cannot tell its infrastructure partner how much compute it needs.

Seven Hundred Billion Dollars of Vibes

The eight largest hyperscalers are expected to spend a combined $710 billion on infrastructure this year. Meta alone is dumping $135 billion into capital expenditures, roughly the GDP of Kenya. Oracle is cutting thousands of jobs and raising $50 billion in debt to keep building. January 2026 saw $25.2 billion in data center construction starts, a single-month record.

All of this spending is predicated on a forecast. A forecast of AI demand that even the companies building the AI cannot reliably produce.

Oracle valued its OpenAI contract at $300 billion over its lifetime. That contract is apparently "still on track," even as the flagship expansion attached to it collapses. If you can explain how a $300 billion commitment remains on track while the physical infrastructure to fulfill it gets abandoned, you understand this industry better than I do.

The Broker

Here is the detail that should make you pay attention. After the Oracle-OpenAI expansion fell apart, Nvidia put down a $150 million deposit on the unbuilt capacity and started shopping it to Meta.

Nvidia is no longer just selling shovels. It is now arranging the real estate deals. When the tenant walks, Nvidia steps in as the broker, finds a new tenant, and presumably ensures that whoever moves in will need a few billion dollars' worth of GPUs to fill the space.

That is a level of vertical influence that goes well beyond chipmaking. They are not just supplying the market. They are shaping the demand curve, financing the supply, and brokering the deals when they fall through. At some point you stop calling that a vendor relationship and start calling it a dependency.

The Problem With Gigawatt Thinking

There is something fundamentally strange about planning infrastructure in gigawatt increments when you cannot forecast demand in megawatt increments. The industry has decided that the answer to "how much compute will we need?" is "all of it, and then some." The strategy is to overbuild massively and hope utilization catches up.

This works until it does not. And when it does not work at gigawatt scale, the consequences are not a few empty racks in a colo. They are billions in stranded capital, thousands of layoffs at companies like Oracle, and municipal power grids upgraded for load that may never materialize.

The communities that approved these projects, that rezoned their land and upgraded their substations, are left holding infrastructure commitments designed for a future that may arrive late, or differently, or not at all.

What We Know

We know the demand for AI compute is real and growing. We know the current trajectory of spending is unprecedented. And we know that the single most important company driving that demand just admitted, through its actions if not its words, that it does not know how much infrastructure it actually needs.

Every boom in history has had a moment where the gap between capital deployed and demand understood became visible. This week, it became visible. The money is still flowing. The concrete is still pouring. But the forecast is fog.

Build accordingly.