The Pentagon told Anthropic the two sides were "nearly aligned" one week after Trump killed the partnership, according to court filings in which the government's own timeline undercuts its national security case.
The Summary
- Anthropic filed sworn declarations challenging the Pentagon's "unacceptable national security risk" claim, revealing DoD sent positive signals just days after Trump publicly severed ties
- The government's case reportedly hinges on technical misunderstandings and issues never raised during months of actual negotiations
- This isn't just about one AI contract. It's about whether the executive branch can retroactively rewrite procurement reality to fit political narratives
The Signal
Anthropic's filings reveal a timeline problem that should worry anyone building AI infrastructure the government might actually need. The Pentagon was privately telling Anthropic the two sides were close to alignment. Then Trump made a public declaration killing the relationship. Only afterward did the legal justification arrive, dressed up as national security concerns that apparently weren't urgent enough to mention during months of technical discussions.
This matters because the agent economy requires stable partnerships between frontier AI labs and government. Not just defense, but intelligence, civilian infrastructure, regulatory frameworks. If the deal terms can shift based on executive whim rather than technical reality, no serious AI company can price that risk. You can't build long-term research programs or allocate capital when the ground rules change between breakfast and lunch.
The "technical misunderstandings" claim is particularly damaging. It suggests the Pentagon's legal team either didn't understand what its own technical evaluators were negotiating, or understood perfectly well and found new objections after the political winds shifted. Either scenario is bad. One means incompetence in evaluating AI capabilities. The other means the procurement process is theater.
For Anthropic specifically, the stakes extend well beyond one contract. They've positioned themselves as the "responsible AI" company, the one governments can work with. If that reputation gets torched by a national security label that emerged post hoc, their differentiation versus OpenAI and other labs collapses. The sworn declaration format isn't casual. They're betting their credibility that the government's story won't hold up in discovery.
The Implication
Watch how the discovery process unfolds. If Anthropic can prove the Pentagon's technical team gave green lights while lawyers retroactively found red flags, it exposes how politicized AI procurement has become. For other AI labs, the lesson is stark: written technical agreements mean less than the political temperature. For government AI adoption generally, this delays everything. No frontier lab will negotiate in good faith if a deal can be undone by tweet and justified later with security theater. The agent economy needs boring, predictable contracts. This is the opposite.
Source: TechCrunch AI