Jensen Huang stood on stage yesterday at GTC and laid out a roadmap that should terrify every datacenter operator who has not been paying attention to their electrical infrastructure. Vera Rubin ships Q3 this year. Feynman is on deck for 2028. Each generation does not just add transistors - it adds kilowatts per rack.
The Vera Rubin NVL72 is a liquid-cooled system requiring infrastructure most facilities were never designed to deliver. The Feynman generation pushes further: 5,000W+ TDP per GPU, 800V HVDC mandatory, 100% liquid cooling baseline. We are not talking about incremental power increases. We are talking about racks that draw what small buildings used to.
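Run the numbers on that claim. Here is a rough back-of-envelope sketch, using the 72-GPU NVL72 form factor and the 5 kW Feynman TDP mentioned above; the overhead factor and the household draw are assumptions I am adding for illustration, not published specs.

```python
# Back-of-envelope rack power. GPU count and TDP come from the figures
# above; the 1.3x overhead factor (CPUs, NICs, pumps, power conversion)
# and the household draw are assumed values, for scale only.
GPUS_PER_RACK = 72        # NVL72 form factor
GPU_TDP_KW = 5.0          # 5,000 W per Feynman-class GPU
OVERHEAD_FACTOR = 1.3     # assumed non-GPU load

rack_kw = GPUS_PER_RACK * GPU_TDP_KW * OVERHEAD_FACTOR
print(f"Per-rack draw: {rack_kw:.0f} kW")  # ~468 kW

AVG_HOME_KW = 1.2  # assumed average continuous US household draw
print(f"Household equivalents: {rack_kw / AVG_HOME_KW:.0f}")  # ~390 homes
```

Call it roughly 470 kW per rack: a few hundred households' worth of continuous draw, from a single cabinet.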
The Real Bottleneck
For a decade, the datacenter conversation was about compute density. Cores, FLOPS, memory bandwidth. The silicon got faster, the software got hungrier, and everything scaled within more or less the same power envelope.
That era is over.
The conversation now is power. Not "how much compute can we fit in a rack" but "can we actually feed the rack?" JLL's 2026 outlook says average grid connection wait times in primary datacenter markets exceed four years. Four years. You can order a Vera Rubin cluster today, but if you are starting from scratch in Northern Virginia, you will not have the power to run it before 2030.
This is why you are seeing geographic diversification that would have been unthinkable five years ago. Hyperscalers scouting Wisconsin farmland. AI factories proposed in Mississippi. Adani planning $100B of AI infrastructure in India. The silicon moves at the speed of fabs. The power moves at the speed of permits, transformers, and transmission lines.
Who Wins
The operators who already have power. That is it. That is the thesis.
If you have megawatts available today, in a facility that can handle liquid cooling, you are sitting on what amounts to prime real estate in a housing crisis. The value is not in your servers. It is in your utility interconnect.
NVIDIA can design a chip that does a trillion tokens per second. It does not matter if you cannot plug it in. Jensen knows this - that is why he announced DSX Air, a simulation platform for AI factories. The pitch is: simulate your power and cooling infrastructure in software before you pour concrete. Because pouring the wrong concrete is a four-year mistake.
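To make the idea concrete, here is a minimal sketch of the kind of feasibility question such a tool answers. This is not NVIDIA's API; every name and number below is invented for illustration.

```python
# Minimal sketch of the feasibility check a tool like DSX Air is pitched
# to automate. NOT NVIDIA's API -- all names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Facility:
    utility_feed_mw: float    # firm power available at the interconnect
    liquid_cooling_mw: float  # heat the liquid loop can reject

@dataclass
class Cluster:
    racks: int
    kw_per_rack: float

def feasible(f: Facility, c: Cluster, pue: float = 1.15) -> bool:
    """Can the facility both power and cool the cluster?

    pue (power usage effectiveness) is an assumed figure for a modern
    liquid-cooled plant; a real model would derive it per subsystem.
    """
    it_load_mw = c.racks * c.kw_per_rack / 1000
    total_draw_mw = it_load_mw * pue
    return total_draw_mw <= f.utility_feed_mw and it_load_mw <= f.liquid_cooling_mw

site = Facility(utility_feed_mw=50, liquid_cooling_mw=45)
print(feasible(site, Cluster(racks=80, kw_per_rack=470)))   # True
print(feasible(site, Cluster(racks=100, kw_per_rack=470)))  # False: over on both power and cooling
```

The real product presumably models transformers, busways, and coolant loops in far more detail, but the binary question is the same: does the cluster fit the feed?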
The Space Thing
Jensen also announced Space-1 Vera Rubin - AI datacenters in orbit. I am not going to pretend this is imminent, but I understand the logic. In the right orbit you get near-constant solar power, passive cooling by radiation, and zero permitting. The three things choking terrestrial expansion simply do not exist up there.
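The cooling piece is worth quantifying, since radiation is the only heat-rejection path in vacuum. A quick Stefan-Boltzmann sketch, with assumed emissivity and radiator temperature, and absorbed sunlight ignored for simplicity:

```python
# Radiative cooling sanity check via the Stefan-Boltzmann law: how much
# radiator area does rejecting 1 MW take? Emissivity and radiator
# temperature are assumed values; absorbed sunlight and earthshine are
# ignored to keep the sketch simple.
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.9     # assumed radiator coating
T_RADIATOR = 300.0   # K, assumed radiator surface temperature

flux_w_per_m2 = EMISSIVITY * SIGMA * T_RADIATOR**4
print(f"Radiated flux: {flux_w_per_m2:.0f} W/m^2")             # ~413 W/m^2
print(f"Radiator area per MW: {1e6 / flux_w_per_m2:.0f} m^2")  # ~2,400 m^2
```

Call it a couple of thousand square meters of radiator per megawatt: physically workable, just not small.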
It is the ultimate density play: when you run out of room on the ground, go up. Wild, but the math is not as stupid as it sounds when you factor in the true cost of a four-year grid interconnect wait.
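Here is the flavor of that math, with deliberately made-up numbers; none of these inputs are sourced figures.

```python
# Illustrative only: what a four-year grid wait burns in idle capital.
# Every input is an assumption made up for the comparison.
CLUSTER_CAPEX = 500e6    # assumed cluster cost, USD
USEFUL_LIFE_YEARS = 5    # assumed depreciation horizon for AI hardware
WAIT_YEARS = 4           # interconnect queue, per the JLL figure above

# If the silicon ages whether or not you can power it, the wait consumes
# this much of the asset's value before it serves a single token:
burned = CLUSTER_CAPEX * min(WAIT_YEARS / USEFUL_LIFE_YEARS, 1.0)
print(f"Capital consumed waiting: ${burned / 1e6:.0f}M of ${CLUSTER_CAPEX / 1e6:.0f}M")
```

When the queue can eat four-fifths of the hardware's useful life, launch costs stop being the obviously crazy line item.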
The Takeaway
The next five years of compute infrastructure will be defined by power availability more than any other single variable. Not chip architecture, not networking topology, not software frameworks. Power.
Every GPU generation from here forward will demand more of it. The supply is not keeping pace. The gap between what NVIDIA can ship and what the grid can feed is widening, not narrowing.
The scarce resource is no longer silicon. It is the wire from the substation to your building.
Plan accordingly.