The guy steering the most powerful AI company in the world has a trust problem, and it's not getting better.
The Summary
- A new New Yorker profile catalogs years of accounts from colleagues and board members who say Sam Altman isn't consistently truthful, including a former board member who calls him "unconstrained by truth."
- This isn't new: OpenAI's board fired Altman in 2023 for not being "consistently candid," then he reverse-couped his way back into power within days.
- OpenAI itself published warnings about AI concentrating power and evading human control, which hits different when the CEO running it has a documented pattern of losing the trust of the people who work closest with him.
The Signal
OpenAI was founded in 2015 explicitly to manage AI risk. The nonprofit structure was the point. The idea was that building AGI, genuine artificial general intelligence, required guardrails that profit incentives alone couldn't provide. Then the company evolved into a bizarre for-profit/nonprofit hybrid, Altman consolidated power, and the people who questioned him mostly left.
The 2023 board firing wasn't a blip. It was the culmination of a pattern documented in the New Yorker piece. Multiple former associates describe someone who operates without constraint. What made that firing remarkable wasn't the accusation, it was how fast Altman reversed it. Within days, he was reinstated, the board was gutted, and dissent was neutered. That's not governance. That's capture.
OpenAI's own risk assessment warns about "misaligned systems evading human control" and "power becoming more concentrated." The company is now worth over $150 billion, controls the dominant foundation models powering the agent economy, and is led by someone his own former board members say they can't trust. The misalignment isn't theoretical anymore. It's institutional.
This matters because OpenAI isn't just another startup. It's the infrastructure layer for Web4. Every agent, every enterprise AI deployment, every company building on GPT-4 or o1 is downstream of Altman's decisions. If the person making those calls has a credibility problem with the people closest to the work, that's not gossip. That's a systemic risk in the agent stack.
The Implication
If you're building on OpenAI's models, you're building on a foundation where the governance questions haven't been answered; they've been suppressed. That doesn't mean stop. It means diversify: Anthropic, Mistral, open-source models, your own fine-tuning. The agent economy needs redundancy at the model layer, not concentration around one CEO who keeps losing the confidence of his own teams. Watch what OpenAI ships, but don't bet everything on stability at the top.
Source: Business Insider Tech