Anthropic just built an AI so good at finding security holes that they won't let you use it.
The Summary
- Anthropic launched Project Glasswing, pairing an unreleased frontier model (Claude Mythos Preview) with twelve major tech and finance companies to find vulnerabilities in critical infrastructure before attackers do
- Anthropic deems the model too dangerous for public release; launch partners including AWS, Apple, Microsoft, Google, Nvidia, JPMorgan, and the Linux Foundation get exclusive access
- Anthropic is backing this with $100M in usage credits plus $4M in direct donations to open-source security orgs, while their revenue just hit $30B annualized (up from $9B three months ago)
- The initiative aims to enable defensive cybersecurity work with virtually no human intervention, with plans to expand access to more than 40 organizations that build or maintain critical software
The Signal
This is the agent economy eating its first truly mission-critical infrastructure layer. Anthropic's Claude Mythos Preview represents a threshold crossing: a model so capable at offensive security research that its creators won't risk a public release, yet so valuable defensively that the world's biggest tech companies are willing to restructure their security operations around it.
The timing matters. Anthropic's revenue more than tripled in three months to a $30B run rate, with over 1,000 customers now spending $1M+ annually (a figure that doubled in under two months). They signed a multi-gigawatt compute deal with Google and Broadcom and poached a senior Microsoft infrastructure exec. This isn't a research project. It's a land grab for the security layer of every Fortune 500 company.
The coalition structure is the real tell. When AWS, Apple, Microsoft, Google, and Nvidia all sign up as launch partners for the same security tool, they're not collaborating out of altruism. They're hedging. If one AI model becomes the de facto standard for finding zero-days in critical infrastructure, you want to be inside that tent. The $100M in credits and $4M in direct funding to open-source security projects is Anthropic buying legitimacy and coverage, making this a public-private partnership in everything but name.
The key line is what Newton Cheng, Anthropic's frontier red team cyber lead, told The Verge: the model will enable defensive work "with virtually no human intervention." That's not automation of existing security workflows. That's replacement. Security researchers who spend weeks tracing exploit chains are about to compete with an agent that runs 24/7, doesn't need coffee, and apparently finds things dangerous enough that Anthropic won't let researchers outside this coalition even see the model.
The Implication
If you're in cybersecurity, the question isn't whether AI agents will do your job. It's whether you'll be inside or outside the coalition that controls the best ones. Anthropic just drew that line, and it runs through twelve companies and 40-plus organizations. Everyone else is now playing catch-up or waiting for the public version, which may never come. For enterprises, this is a forcing function: your security posture increasingly depends on which AI models you can access, not which people you can hire. Start asking your vendors which models power their tools and who controls access.
Sources: VentureBeat | The Verge AI | TechCrunch AI | Bloomberg Tech