Sam Altman tried to play peacemaker while his company seized the Pentagon contract his competitor just fumbled.
The Summary
- OpenAI CEO Sam Altman told staff he was trying to "save" Anthropic during its failed Pentagon negotiations, even as OpenAI moved to capture the contract for itself
- Internal Slack messages show Altman positioning himself as principled mediator while privately criticizing Anthropic CEO Dario Amodei for undermining him
- Even as Altman expressed concern about optics, he was already exchanging draft contract language with Defense Under Secretary Emil Michael
- The timeline reveals OpenAI's government affairs playbook: express solidarity publicly, move decisively in private
The Signal
The Slack messages trace a ten-day period in late February when Anthropic's Pentagon negotiations collapsed and OpenAI stepped in. Axios first reported February 14 that the Pentagon was considering cutting ties with Anthropic. Ten days later, on February 24, Altman told anxious employees it might be time to work out a classified deal with Defense Under Secretary Emil Michael. Michael called Altman that same afternoon. Draft contracts started flowing the next day.
The speed matters. On February 26, Altman sent an all-staff message saying OpenAI shared Anthropic's red lines and wanted to help de-escalate. He acknowledged the optics looked bad but emphasized acting on principle over appearances. Twenty-four hours later, he was telling core staff that Anthropic's negotiations had worsened because Amodei was perceived as playing to the press. The Pentagon's 5 p.m. deadline hit that same day.
This is not a story about corporate backstabbing. This is a story about how AI companies navigate government contracts when national security meets public positioning. Altman's internal framing reveals the tension: you can believe your competitor's principles are worth defending AND believe your company should win the contract when they fail. Both things can be true.
The Pentagon contract is classified, so we don't know the terms or scope. But we know Defense Under Secretary Emil Michael initiated contact, and we know OpenAI had draft language ready within 24 hours. That speed suggests OpenAI had been preparing its own Pentagon approach independent of Anthropic's troubles, or that it keeps government contract templates ready to deploy. Either way, it signals operational maturity in government affairs.
The Implication
Watch how AI companies position themselves on defense work going forward. Altman's attempt to claim both moral high ground and commercial victory set a template others will copy or reject. If you're building AI systems, understand that government contracts now carry reputational risk that requires active management, not just legal review. The window between "we share their concerns" and "we won the contract" is shrinking to days, not months.
Source: Axios