Jamie Dimon just said the quiet part loud: AI agents aren't just coming for jobs; they're coming with cyberattacks the world isn't ready for.
The Summary
- JPMorgan's Dimon says the U.S. faces more concurrent geopolitical risk than at any point since WWII, with AI agent-driven cyber threats at the top of his list
- He was briefed on Anthropic's unreleased Mythos model, which the company fears could dramatically increase catastrophic cyberattack capabilities
- His nine-threat stack: China, cyber, Iran, Russia, rogue AI, private credit crisis, unsustainable debt, political dysfunction, nuclear weapons
- The kicker: "AI makes cyber, and these agents make cyber, far worse"
The Signal
The chairman of America's largest bank just confirmed what security researchers have been screaming into the void for months. AI agents don't just automate spreadsheets and customer service. They automate vulnerability discovery, exploit development, and coordinated attack execution at a scale and speed that makes traditional cybersecurity look like a moat against a tsunami.
Dimon was briefed on Anthropic's Mythos model, an AI system so capable at offensive cyber operations that the company hasn't released it. Think about that. Anthropic, a company founded on AI safety principles, built something they're afraid to ship. And they showed it to Jamie Dimon. This isn't theoretical anymore. The agent economy is already producing weapons-grade capabilities.
The timing matters. We're watching the first wave of autonomous AI agents hit enterprise workflows. Companies are deploying them to write code, manage infrastructure, analyze systems. Every helpful agent that can read your codebase and suggest improvements is also an agent that could, with different instructions, find every exploitable weakness. The same reasoning capabilities that make agents useful make them dangerous.
Dimon's threat stack isn't random anxiety. It's pattern recognition from someone who saw 2008 coming and has spent two decades war-gaming systemic risks. When he puts "rogue AI" and "cyber" near the top, next to nuclear weapons and great power conflict, he's not being dramatic. He's pricing in a world where AI agents lower the skill floor for catastrophic attacks while raising the ceiling for damage. A world where a hostile state doesn't need to recruit hundreds of hackers when a dozen people with the right model can do exponentially more.
The "chickenshit CEO" comment cuts deep. Dimon says business leaders made a mistake by not getting involved earlier, and that politicians alone won't fix society's problems. Translation: the people building and deploying AI agents have been too quiet about the second- and third-order effects. They've been maximizing deployment speed while minimizing uncomfortable conversations about what happens when these tools proliferate.
The Implication
If you're building or deploying AI agents, Dimon just told you the honeymoon is over. The same capabilities that automate knowledge work will automate offensive cyber operations. Assume Mythos-class models are 12-18 months from wider availability, whether through leaks, independent development, or a changing risk calculus at the labs. That means your security posture needs to assume attackers with AI co-pilots that never sleep, never miss a pattern, and iterate at machine speed. The organizations that survive the next decade won't be the ones that deployed agents fastest. They'll be the ones that deployed them most carefully and hardened their systems against the agents attacking them.
Source: Axios