The Telegraph

March 16, 2026

Today Jensen Huang walks onto a stage in San Jose and tells twenty thousand people what his chips will need two years from now. Not what they can do. What they will demand.

This is the part of GTC that most coverage misses. The product announcements get the headlines. The power numbers are the real story.

The Escalation

Two years ago, NVIDIA shipped the NVL72. Seventy-two GPUs in a rack. 120 kilowatts. Data center operators scrambled. Many buildings could not deliver that kind of density. Some still cannot.

Last year, NVIDIA unveiled Kyber. 144 GPU sockets. 600 kilowatts. A single rack consuming more power than a city block of homes. They announced it not because it was ready to ship, but because data center operators needed a two-year head start to provision the electrical and cooling infrastructure to support it.

This year, with Feynman on the 2028 roadmap, the expectation is that NVIDIA will set new targets. Likely exceeding a megawatt per rack.

Read that again. One rack. One megawatt. That is a small factory's worth of power, concentrated into a space the size of a refrigerator.
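To make that scale concrete, here is a back-of-envelope sketch. The household and footprint figures are my own rough assumptions, not from NVIDIA: roughly 1.2 kW average draw per US home, and a standard rack footprint of about 0.6 m by 1.2 m.

```python
# Back-of-envelope scale of a one-megawatt rack.
# Assumptions (rough, illustrative only):
#   - an average US home draws about 1.2 kW averaged over time
#   - a standard rack footprint is roughly 0.6 m x 1.2 m

RACK_POWER_W = 1_000_000       # one megawatt per rack
AVG_HOME_W = 1_200             # assumed average household draw
RACK_AREA_M2 = 0.6 * 1.2       # assumed rack footprint

homes_equivalent = RACK_POWER_W / AVG_HOME_W
density_kw_per_m2 = (RACK_POWER_W / 1_000) / RACK_AREA_M2

print(f"~{homes_equivalent:.0f} average homes' worth of power")
print(f"~{density_kw_per_m2:.0f} kW per square metre of floor")
```

Under those assumptions, a single rack draws on the order of eight hundred homes' worth of power, at a density of well over a thousand kilowatts per square metre of floor.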

The Inversion

There is something deeply strange happening here. The chip company is dictating the building requirements. Not the other way around.

Traditionally, infrastructure constrains compute. You build a data center with a certain power envelope, a certain cooling capacity, and you fill it with whatever hardware fits. The building is the constant. The silicon adapts.

NVIDIA has inverted this. The silicon sets the pace. The buildings must adapt. And because buildings take years to design, permit, and construct, NVIDIA has to telegraph its punches far in advance. Jensen is not just announcing products at GTC. He is issuing construction orders to an entire industry.

Why This Matters

The average wait time for grid connection in primary data center markets now exceeds four years. Think about that timeline. NVIDIA releases a new GPU architecture every year. The electrical grid moves at a quarter of that speed.

This is why NVIDIA partnered with EPRI and Prologis to study smaller data centers. Five to twenty megawatts. Not because small is better, but because small is possible. You can get five megawatts of power to a site in months. Getting five hundred takes years and a utility's blessing.

It is also why operators with existing power allocations hold an enormous structural advantage. If you already have the watts, you skip the queue. Everyone else is waiting for permits while you are racking GPUs.

The Admission

There is a quiet admission buried in all of this. NVIDIA's roadmap assumes the rest of the world will reorganize itself around their chips. They are probably right. But the gap between silicon capability and infrastructure readiness is widening, not narrowing.

Every GTC keynote is a clock. Jensen stands on stage and says: here is what is coming, here is how much power it needs, and you have this many months to figure it out.

The chip is no longer the bottleneck. The building is. The transformer is. The water is. The grid connection that takes four years to approve, for hardware that will be obsolete in two.

That is the real announcement today. Not the chip. The constraint.