The guy who spotted Anthropic early is now trying to build the power grid for AI.
The Summary
- Anjney Midha, former a16z GP and early Anthropic investor, has unveiled AMP, a venture attempting to raise $10B+ to build a "grid for AI servers"
- The pitch: centralized access to compute infrastructure, modeled on how electrical grids distribute power
- This is an infrastructure play crossed with a marketplace, a bet that AI compute becomes a utility
The Signal
Anjney Midha's AMP is going after a real bottleneck. Getting GPU clusters right now is like trying to buy a house in 2021: you either know someone, overpay, or wait. Most AI developers are locked into whatever cloud provider gave them credits or whoever they could convince to sell them chips. That's not a market; that's a scarcity game.
The grid metaphor matters. Electricity didn't take off when every factory had its own generator. It took off when Edison and Westinghouse built distribution systems that let anyone plug in and pay for what they used. Midha is betting AI compute follows the same path. Instead of every AI lab scrambling to secure their own server clusters, they tap into a shared infrastructure layer that handles procurement, deployment, and allocation.
The $10B+ fundraising target signals this isn't about reselling AWS instances. This is building actual data centers, buying chips at scale, and becoming the intermediary between Nvidia's supply chain and every AI company that needs to train models. AMP would sit between hardware manufacturers and developers, smoothing out the peaks and valleys of compute demand.
The timing tracks. We're past the phase where only OpenAI and Google can afford to train frontier models. Hundreds of companies now need serious compute, but not enough to justify building their own infrastructure. That's the gap AMP wants to fill. If they pull it off, they become the ComEd of AI, the boring monopoly everyone depends on and nobody thinks about.
The Implication
Watch who invests in this round. If sovereign wealth funds and infrastructure-focused LPs pile in, that validates the utility thesis. If it's mostly AI-focused VCs, this is more speculative. Either way, centralized compute infrastructure is coming. The question is whether it consolidates around one player or fragments across regions and regulatory boundaries. For AI developers, this could mean finally escaping cloud vendor lock-in. For everyone else, it's another reminder that AI runs on very physical, very expensive hardware that someone has to own.
Source: The Information