The Pentagon just called Anthropic's Chinese workforce a national security risk while continuing to use the company's AI tools.
The Summary
- The Pentagon filed a court declaration stating Anthropic employs "a large number of foreign nationals" including many from China, citing China's National Intelligence Law as creating "adversarial risk."
- The Defense Department claims other major AI labs pose less risk due to "technical and security assurances" and "consistently responsible and trustworthy behavior."
- The Pentagon still uses Anthropic's tools and will extend offboarding deadlines "as necessary."
- This filing is part of the Pentagon's defense against Anthropic's lawsuit challenging its designation as a supply chain risk.
The Signal
The Pentagon is trying to have it both ways. In a March 17 court filing, Pentagon undersecretary Emil Michael argued that Anthropic's foreign workforce, particularly employees from China, creates unacceptable security risks due to China's National Intelligence Law, which can compel citizens to cooperate with intelligence activities. Yet the same filing admits the Pentagon is still using Anthropic's systems and will keep extending deadlines to switch providers.
This is not about Anthropic. This is about the collision of two incompatible realities. First reality: AI development requires the best minds, and those minds come from everywhere. Chinese-origin researchers make up roughly 38-40% of top AI talent at U.S. institutions. You cannot build frontier AI without global talent. Second reality: the U.S. national security apparatus operates on zero-trust assumptions about foreign nationals, especially from strategic competitors.
The Pentagon's argument contains a tell. They say other labs with foreign workers are fine because of "technical and security assurances" and "trustworthy behavior." Translation: the other labs play ball. They take the contracts, build what's asked, don't file lawsuits about mass surveillance or autonomous weapons. Anthropic drew a line, refused certain military applications, and is now in court fighting a supply chain risk designation. Suddenly their workforce composition matters.
This sets a precedent that should terrify anyone building in AI. If you employ talented engineers from the wrong countries and refuse to fully align with defense priorities, you become a security risk. If you employ the same people but say yes to everything, you're trustworthy. The criterion isn't the workforce; it's compliance.
The Implication
Watch how other AI companies respond. If they start quietly restructuring teams or adding citizenship requirements for certain roles, you'll know the message landed. The Pentagon just outlined a playbook: global talent is fine until you stop cooperating, then it becomes a weapon against you. For founders building agent companies, this is your warning. The government will use whatever lever works. For AI researchers from China or other countries the U.S. considers strategic competitors, your presence just became ammunition in corporate battles you're not part of.
Source: Axios