The empire's best customers just announced they're building competing empires—and the market shrugged.
The Summary
- Google and Amazon told investors they're moving to sell their custom AI chips (TPUs and Trainium) directly to customers, not just through their clouds
- Amazon CEO Andy Jassy said there's a "good chance" they'll sell full Trainium racks within two years; Google CEO Sundar Pichai committed to delivering TPUs to select customers in their own data centers this year
- One analyst called the shift "irreversible," but Nvidia's stock barely flinched—the question is whether the market sees the threat or is still drunk on AI infrastructure spending
The Signal
For three years, Nvidia's customers have been writing checks so large they needed their own accounting category. Google, Amazon, Microsoft, and Meta, the hyperscalers, have been burning billions on GPUs to train foundation models and serve inference at scale. The relationship was simple: Nvidia makes the best chips, everyone else buys them.
That arrangement just got complicated. Google and Amazon don't want to be customers anymore. They want to be competitors.
"Virtually all AI thus far has been done on Nvidia chips, but a new shift has started."
The shift Jassy described isn't hypothetical. Until now, Amazon's Trainium chips have been cloud-only: available exclusively to AWS customers who rent compute time, never sold as hardware. Same with Google's TPUs. You could use them, but you couldn't own them. That's changing. Pichai's commitment to deliver TPUs to customer data centers in 2026 is the first time Google has publicly put a timeline on competing directly with Nvidia in the chip market.
Here's what makes this different from the usual "Nvidia killer" headlines:
- These aren't startups with PowerPoints. Google and Amazon have been designing custom silicon for years and have production infrastructure at scale.
- They control massive cloud platforms where most AI training already happens. They can bundle chips with services in ways Nvidia can't.
- They're not trying to beat Nvidia on raw performance. They're trying to win on cost and vertical integration for specific workloads.
Jassy's letter to shareholders framed it bluntly: most AI has run on Nvidia chips, but that's starting to change. He didn't say "might change" or "could change." He said it's already happening. Amazon is betting that enterprises training models at scale will want dedicated Trainium racks they own outright, not just cloud instances they rent.
The irony is thick. Nvidia's explosive growth came from selling picks and shovels to the AI gold rush. Now the biggest buyers of those picks are building their own tool factories, and they're funding the effort with the revenue from renting out Nvidia's tools. Amazon and Google spent billions on H100s and Blackwells to build their clouds. Those clouds print money. That money funds Trainium and TPU development, which will, eventually, reduce their dependence on Nvidia.
The Implication
This doesn't kill Nvidia tomorrow. The company still makes the fastest chips and has a software moat (CUDA) that's genuinely hard to replicate. But the hyperscalers aren't trying to replicate it. They're building around it. They're targeting workloads where extreme performance matters less than cost efficiency and integration with their platforms.
Google already trains Gemini on its own TPUs; watch for the moment an outside lab announces a flagship model trained entirely on Trainium, or on TPUs running in its own data centers. That's when the narrative shifts from "interesting experiment" to "existential threat." If you're building AI products, start testing on TPUs or Trainium now. Not because they're better, but because cloud lock-in is real and your compute costs will drop when platforms push their own chips harder. The agent economy runs on inference, and inference runs on whatever's cheapest at scale.
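That first test can be smaller than you'd think. Here's a minimal sketch of a portability smoke check in JAX, assuming a Cloud TPU VM with the jax[tpu] package installed; the matrix sizes and the toy layer are illustrative placeholders, not anything from Google's or Amazon's materials, and the same script runs unchanged on a GPU or CPU box.

```python
# Minimal portability smoke test: identical JAX code on TPU, GPU, or CPU.
# Assumption: `pip install "jax[tpu]"` on a Cloud TPU VM (plain `jax` elsewhere).
# Sizes are illustrative placeholders, not benchmarks.
import time

import jax
import jax.numpy as jnp


def main():
    # Shows which backend JAX picked up: 'tpu', 'gpu', or 'cpu'.
    print(f"backend: {jax.default_backend()}, devices: {jax.devices()}")

    # Stand-in for a real workload: one jitted matmul plus a nonlinearity.
    @jax.jit
    def layer(x, w):
        return jax.nn.relu(x @ w)

    kx, kw = jax.random.split(jax.random.PRNGKey(0))
    x = jax.random.normal(kx, (4096, 4096), dtype=jnp.bfloat16)
    w = jax.random.normal(kw, (4096, 4096), dtype=jnp.bfloat16)

    layer(x, w).block_until_ready()  # trigger compilation once, outside the timing loop
    start = time.perf_counter()
    for _ in range(10):
        out = layer(x, w)
    out.block_until_ready()  # JAX dispatch is async; wait before reading the clock
    print(f"10 iterations in {time.perf_counter() - start:.3f}s")


if __name__ == "__main__":
    main()
```

The Trainium equivalent of this first step goes through AWS's Neuron SDK rather than CUDA-era tooling, and that porting friction is exactly what you want to measure now, before your cloud provider starts pricing its own silicon aggressively.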