Anthropic just announced it's going to audit the code AI agents actually use, not the code we wish they used.
The Summary
- Anthropic launched Project Glasswing, a security initiative to identify and harden critical open-source software that AI systems depend on
- The project starts with security reviews of widely-used libraries that AI agents interact with, focusing on the supply chain underneath the models
- This is Anthropic betting that AI security isn't just about the models; it's about the entire software stack agents touch when they act in the real world
The Signal
Project Glasswing marks a shift in how AI companies think about security. Instead of just making models safer, Anthropic is looking at the pipes and plumbing. When an AI agent reads your email, books a flight, or moves money, it's running on libraries that were written for humans, not autonomous systems. Most of that code has never been audited with the assumption that an AI might be the one calling it thousands of times per second.
The timing matters. As agents move from demos to production, the attack surface isn't the model anymore. It's the decades-old open-source libraries doing the actual work. The same npm package that's been fine for web developers becomes a different kind of risk when an agent with API access to your bank account is using it. Anthropic is essentially saying: we need to know what's in the supply chain before we ship agents that depend on it.
This is also a quiet admission about where we are in the agent economy. The bottleneck isn't model capability anymore. It's trust. Companies won't deploy agents at scale if they don't know what those agents are running on. Glasswing is Anthropic trying to build that trust by going upstream, auditing the code that sits between Claude and the real world.
The Implication
If you're building agent infrastructure, pay attention to what Glasswing flags. Those security reviews will become the de facto standard for what "agent-safe" means. If you're investing in the agent stack, watch for companies that build hardened versions of common libraries specifically for AI use cases. The picks-and-shovels play here isn't just model APIs; it's secure-by-default tooling for the software agents actually execute.
Sources: Hacker News Best | Anthropic