The GPU wars just became a sideshow.
The Summary
- Meta signed a multibillion-dollar deal with Amazon to rent hundreds of thousands of Amazon's general-purpose CPUs, not GPUs, for AI workloads.
- The chips are specifically targeted at agentic AI workloads, marking a fundamental shift in what kind of silicon AI companies need at scale.
- This signals the beginning of a different chip race: one optimized for inference and autonomous decision-making, not just training massive models.
The Signal
For two years, every AI infrastructure story has been about GPUs. Who has the most H100s. Who's got access to Blackwell. Which hyperscaler can provision the densest training clusters. Meta just placed a multibillion-dollar bet that the next phase needs different silicon entirely.
The deal centers on Amazon's homegrown CPUs, custom silicon Amazon has been quietly developing while everyone watched Nvidia's stock price. Meta is renting hundreds of thousands of these chips. Not tens of thousands. Hundreds of thousands. That's infrastructure at a scale that suggests this isn't an experiment.
"A new kind of chip race has begun."
The key phrase here is "agentic workloads." Training a foundation model requires parallel processing at absurd scale, which is why GPUs dominate. But running millions of agents that need to think, decide, and act in real time? That's a different compute profile entirely:
- Lower precision requirements per operation
- Massively parallel but independent decision threads
- Inference-heavy, not training-heavy
- Cost per token becomes the primary economic constraint (see the back-of-the-envelope sketch after this list)
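To make that last point concrete, here's a rough sketch in Python. Every price and throughput figure below is invented for illustration; none of it comes from the deal terms:

```python
# Back-of-the-envelope cost-per-token comparison.
# All prices and throughputs are illustrative assumptions,
# not figures from the Meta/Amazon deal.

def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """Cost in USD to generate one million tokens on a given instance."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Hypothetical numbers: a fast, expensive GPU instance vs. a slow, cheap CPU instance.
gpu = cost_per_million_tokens(hourly_rate_usd=12.00, tokens_per_second=2500)
cpu = cost_per_million_tokens(hourly_rate_usd=0.60, tokens_per_second=180)

print(f"GPU: ${gpu:.2f} per 1M tokens")  # ~$1.33
print(f"CPU: ${cpu:.2f} per 1M tokens")  # ~$0.93
```

Under these made-up numbers the CPU is an order of magnitude slower but twenty times cheaper per hour, so it still wins on cost per token. That trade, multiplied across millions of agents, is presumably the arithmetic behind Meta's bet.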
Meta sees a future where its AI infrastructure isn't provisioning one giant model. It's provisioning millions of small, autonomous agents making decisions across WhatsApp, Instagram, Facebook, and whatever comes next. GPUs are expensive overkill for that architecture.
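What does "millions of small, autonomous agents" look like as a compute shape? Here's a toy sketch, with an invented Agent class standing in for whatever Meta actually runs. The point is structural: independent decisions fan out across ordinary cores, with no shared batch to synchronize the way GPU training requires:

```python
# Minimal sketch of the agent-serving shape: many small, independent
# decision loops rather than one giant batched model call.
# Agent and decide() are hypothetical placeholders, not Meta's stack.

from concurrent.futures import ThreadPoolExecutor

class Agent:
    def __init__(self, agent_id: int):
        self.agent_id = agent_id

    def decide(self, observation: str) -> str:
        # Stub for a small-model inference call; in real serving this
        # would call into native inference code. Each agent's decision
        # is independent of every other agent's.
        return f"agent {self.agent_id} acts on {observation!r}"

agents = [Agent(i) for i in range(10_000)]

# Independent tasks fan out across threads; there is no shared batch
# to synchronize, which is the profile general-purpose CPUs handle well.
with ThreadPoolExecutor(max_workers=64) as pool:
    results = list(pool.map(lambda a: a.decide("new message"), agents))

print(results[0])
```

None of this is Meta's actual architecture; it just shows why the workload maps naturally onto fleets of cheap general-purpose cores rather than tightly coupled accelerators.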
This also marks Amazon entering the AI chip provider game not as an also-ran, but as a vendor to one of the biggest AI players on earth. Amazon Web Services has been designing custom silicon for years, optimized for the workloads its own massive e-commerce and cloud operations actually run. Now it's selling that expertise externally, and at multibillion-dollar scale, which makes Amazon a serious alternative in a market Nvidia has owned.
The Implication
If Meta is right, the AI infrastructure stack is bifurcating. Training stays on GPUs. Inference and agentic workloads move to cheaper, purpose-built CPUs optimized for decision-making at scale. Watch where the other hyperscalers place their next big chip orders. If Google, Microsoft, or OpenAI follow Meta's lead, the CPU suppliers who can deliver hundreds of thousands of chips for agent workloads will matter as much as the GPU vendors.
For anyone building in the agent economy, this is a supply-side signal. The infrastructure to run millions of agents concurrently is being built right now, and the economics are shifting in favor of deployment at scale.