Nvidia just told us the ceiling on AI isn't compute anymore; it's the power grid.
The Signal
Jensen Huang's $1 trillion revenue projection through 2027 isn't a sales forecast. It's a declaration that Nvidia has rewritten the economics of intelligence production. The company's chips now improve performance-per-watt so fast that historical comparisons break down. We're watching Moore's Law get turbocharged by existential necessity.
Here's what matters: Nvidia's market share dropped from 100% to 65% in three years, yet it's posting record earnings. That's not a contradiction. It's proof the entire AI chip market is exploding faster than any single player can capture it. The pie is growing so fast that losing share still means gaining billions.
But the real story is the pivot everyone's missing. Training AI models was Nvidia's fortress, its chips perfectly optimized for that brute-force work. Now the industry is shifting to inference, running those models at scale, and inference is all about efficiency. Different game, different rules. Huang's rare blog post about energy being "the ceiling on how much intelligence can be produced at all" wasn't philosophy. It was an acknowledgment that Nvidia needs to prove it can dominate the next phase or watch competitors eat its margins.
The power constraint is real. Data centers are already hitting grid limits in Northern Virginia and other AI hubs. Nvidia's survival depends on making chips that squeeze more intelligence per kilowatt-hour, not just more FLOPS per dollar.
The Implication
Watch which companies start building their own inference chips in the next 18 months. If Nvidia can't own inference efficiency the way it owned training performance, we're heading for a genuinely competitive AI chip market. That means lower costs for running agents at scale, which means the agent economy arrives faster and weirder than anyone's pricing in today.
Source: Axios