The CEO of the world's most valuable AI company just blamed a competitor's rhetoric for the firebombing of his house.
The Summary
- Sam Altman told podcaster Ashlee Vance that "the way Anthropic talks about OpenAI doesn't help" when discussing the recent Molotov cocktail attack on his San Francisco home
- Altman cited AI doomerism and inter-lab rhetoric as contributing factors to the attack, marking a new low in the OpenAI-Anthropic rivalry
- The OpenAI CEO went through what he described as "a real depressive cycle" after the attack, telling Vance "there's gonna be more stuff like this"
The Signal
When someone throws a firebomb at your house, you can either blame the person who threw it or the people whose words might have inspired them. Sam Altman is doing both.
In a podcast interview with Ashlee Vance posted Tuesday, the OpenAI CEO directly named Anthropic, his company's chief rival, as part of the problem. "I think the doomerism talk hasn't helped. I think the way certain other labs talk about us hasn't helped," Altman said, before adding the specific callout: "I think the way Anthropic talks about OpenAI doesn't help."
Describing his reaction, Altman told Vance: "I was just, like, you know, there's gonna be more stuff like this, and it's incredibly disheartening."
It's unclear exactly which Anthropic statements Altman means; he didn't specify. But the fact that he named the company at all, in the context of a violent attack on his home, signals how deep the rift between the two AI labs has become. This isn't a conference-stage disagreement about scaling laws. This is one CEO saying another CEO's public rhetoric contributed to his home being firebombed.
After the attack, Altman wrote a blog post referring to "the Shakespearean drama between the companies in our field," though he named no names there. He also suggested a recent New Yorker exposé may have made things more dangerous for him. The common thread: words have consequences, especially when you're building technology that people see either as humanity's salvation or as its extinction event.
Key context:
- Authorities identified the suspect as Daniel Moreno-Gama
- Altman described going through "an adrenaline shock" followed by a depressive cycle
- He characterized the experience as "very scary" and expects more incidents
The rivalry between OpenAI and Anthropic has always been personal. Anthropic was founded by former OpenAI research leaders who left over safety concerns. They've positioned themselves as the more cautious, more responsible AI lab. That positioning requires a foil. OpenAI, racing to ship AGI while Anthropic talks about Constitutional AI, makes for a convenient one.
The Implication
The AI safety debate just got weaponized, literally. When the CEO of the leading AI company says a competitor's safety messaging contributed to violence against him, we're in new territory. Either Altman is right that doomer rhetoric inspires attacks, or he's using a traumatic incident to score points against a rival. Neither option is good.
Watch how Anthropic responds. If it stays silent, that looks like an admission. If it pushes back, that looks callous. The smarter move would be a joint statement from all the major labs condemning violence and committing to depersonalize the safety debate. But that would require these companies to see each other as something other than existential threats, which seems unlikely given what Altman just said on the record.