Steve Bannon just backed the AI safety crowd against the Pentagon, and that should tell you how strange the defense tech landscape has become.
The Summary
- Steve Bannon said Anthropic "had it right" in refusing the Pentagon's terms to operate Claude with minimal guardrails, calling it "too dangerous."
- The Pentagon blacklisted Anthropic after CEO Dario Amodei rejected a deal over concerns about mass domestic surveillance and fully autonomous weapons, then quickly signed OpenAI instead.
- Bannon's concern isn't AI safety theater—it's transparency about what weapons manufacturers actually do with frontier models behind closed doors.
The Signal
The Anthropic-Pentagon standoff that started in February just got its most unlikely validator. Bannon, speaking at the Semafor World Economy Summit, framed his support around a question that cuts deeper than the usual AI safety debate: "What is happening in the weapons lab with AI? We have no earthly idea."
This matters because Bannon represents a strain of tech skepticism that has nothing to do with effective altruism or longtermism. He's not worried about paperclip maximizers. He's worried about Lockheed Martin and Raytheon getting unrestricted access to models that can design weapons systems, optimize kill chains, and automate targeting decisions with zero public oversight.
"The central thing is what is happening in the weapons lab with AI. We have no earthly idea."
When Defense Secretary Pete Hegseth pressed Anthropic to accept the Pentagon's terms of use or lose its contract, Amodei drew two red lines: no mass domestic surveillance, no fully autonomous weapons. The Pentagon's response was immediate and punitive. They labeled Anthropic a supply chain risk and barred federal agencies from using Claude.
Then it signed OpenAI within weeks. That speed tells you everything about how badly the Pentagon wants frontier models, and how little patience it has for companies that want guardrails on military applications.
Key dynamics at play:
- OpenAI is now the default AI vendor for U.S. defense applications
- Anthropic filed suit in March against Hegseth and multiple federal agencies
- The "supply chain risk" label effectively kills Anthropic's government business across all agencies
What makes Bannon's position fascinating is that it scrambles the usual political alignments. The Trump-adjacent right typically champions defense tech acceleration. But Bannon is treating weapons-grade AI the way he treats immigration or trade: as a sovereignty issue in which American institutions, including the military-industrial complex, can't be trusted without democratic oversight.
The Implication
Watch for this fracture to widen. If populist conservatives start asking hard questions about autonomous weapons and surveillance AI, the defense tech consensus breaks down. Right now, OpenAI has a clear path to becoming the operating system for U.S. military AI. But if Anthropic's lawsuit reveals what the Pentagon actually wanted to do with Claude, the political coalition supporting that arrangement might collapse faster than anyone expects.
The real test: does Bannon's position stay fringe, or does it become the opening position for a broader fight about who controls the models that control the weapons?