Congress is trying to turn Anthropic's fight with the Pentagon into law.
The Summary
- Sen. Adam Schiff is drafting legislation to codify Anthropic's red lines, ensuring humans make final decisions on lethal force and mass surveillance
- Sen. Elissa Slotkin introduced a companion bill specifically limiting DoD's AI-powered surveillance of Americans
- This follows the Trump administration blacklisting Anthropic as a supply-chain risk after it restricted military use of its models
- What was a corporate policy dispute is becoming a constitutional battle over AI governance
The Signal
Anthropic drew a line. No autonomous weapons. No mass surveillance. The Pentagon didn't like it. The Trump administration blacklisted the company earlier this month, calling it a supply-chain risk. Anthropic sued, claiming constitutional violations. Now two Senate Democrats want to make Anthropic's voluntary limits mandatory for everyone.
This is the collision point. A private AI lab said no to certain military applications. The executive branch retaliated. Now the legislative branch is trying to turn corporate ethics into statutory requirements. Schiff's bill would reportedly mandate human decision-making on questions of lethal force and surveillance. Slotkin's bill focuses more narrowly, barring AI-driven mass surveillance of Americans by the Defense Department.
The timing matters. OpenAI, Google, and others are all racing for defense contracts. If Anthropic's stance becomes law, every AI company working with the military faces the same constraints. If it doesn't, Anthropic's position looks less like principle and more like unforced commercial self-sabotage.
This is also a stress test for AI governance. Can a company refuse government contracts on ethical grounds without facing punitive action? Can Congress set guardrails on military AI use that the executive branch will actually respect? These questions have been theoretical until now.
The Implication
Watch what happens to this legislation. If it passes, even in watered-down form, it establishes a precedent that AI companies can set limits and Congress will back them. If it dies, the signal to labs is clear: take the contracts or get blacklisted. For anyone building in the agent economy, this defines the boundaries. Your models will be either multipurpose or defense-compatible. There's going to be a lot less room for both.
Source: The Verge AI