The richest man alive is arguing in federal court that his former partners stole the future — but the judge just told him nobody trusts him with it either.

The Signal

This isn't just billionaire drama. The case centers on whether OpenAI betrayed its founding mission when it shifted from a nonprofit to a for-profit structure, but the real signal is what it reveals about governance in the agent economy. When Musk co-founded OpenAI in 2015, the pitch was simple: build AGI for humanity's benefit, not shareholder returns. Now he's arguing in federal court that his former partners broke that deal by chasing a valuation in the hundreds of billions.

The judge's comment cuts to the core tension. Musk founded xAI in 2023 — a direct competitor to OpenAI building in "the exact same space." His standing to complain about OpenAI's profit motive is undercut by the fact that he's now running his own for-profit AI lab. The hypocrisy is obvious, but it misses the deeper problem: there is no governance model for companies building superintelligence that anyone actually trusts.

"This is not a trial on the safety risks of artificial intelligence. This is not a trial on whether or not AI has damaged humanity."

The profit cap argument is where this gets technically interesting. Musk testified that capped returns wouldn't violate OpenAI's nonprofit commitments — "depending on how high the cap is." That's the whole game. OpenAI's current structure allows investors returns up to 100x their investment before excess profits theoretically flow to the nonprofit parent. Is 100x a cap or a blank check? When the company is worth hundreds of billions, the distinction evaporates.
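To make the arithmetic concrete, here is a minimal sketch of how a capped-profit split might work, assuming the simplest possible reading of "100x cap": the investor keeps everything up to the cap, and only the excess flows to the nonprofit. The function name and the dollar figures are hypothetical; this is not OpenAI's actual waterfall.

```python
def distribute_returns(investment: float, total_payout: float,
                       cap_multiple: float = 100.0):
    """Split a payout between an investor and a nonprofit parent.

    The investor keeps everything up to cap_multiple * investment;
    any excess flows to the nonprofit. Purely illustrative.
    """
    cap = investment * cap_multiple
    investor_share = min(total_payout, cap)
    nonprofit_share = max(0.0, total_payout - cap)
    return investor_share, nonprofit_share

# A hypothetical $10M early investment: the cap sits at $1B, so the
# nonprofit sees nothing until the payout clears a billion dollars.
print(distribute_returns(10e6, 500e6))  # investor takes all of $500M
print(distribute_returns(10e6, 2e9))    # cap binds; $1B spills to nonprofit
```

The point the sketch makes is the one in Musk's testimony: with a 100x multiple, the nonprofit's residual claim only activates at payout levels most companies never reach, which is why "depending on how high the cap is" is doing all the work.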

Key structural points:

  • OpenAI started as a pure nonprofit in 2015, funded primarily by Musk
  • In 2019, it created a "capped-profit" subsidiary to raise serious capital
  • That cap was set at 100x returns for early investors
  • Musk left OpenAI's board in 2018, then launched xAI as a for-profit competitor in 2023

The courtroom exchanges reveal how thin the original agreements were. When opposing attorney William Savitt pressed Musk on contradictions in his testimony about profit caps, Musk accused him of asking "misleading questions designed to trick him and the jury." That's not the testimony of someone with airtight documentation of broken promises. That's someone retrofitting a narrative onto ambiguous early-stage handshakes.

The Implication

Watch the governance structures of every major AI lab in the next 12 months. This trial is making it radioactive to claim you're building "for humanity" while investors are pricing your equity at tens of billions. The OpenAI nonprofit shell was always theater, but now it's expensive theater with discovery and depositions.

For anyone building agent companies: profit caps sound good in pitch decks, but they're just future litigation if you don't define "cap" with precision. Musk's own testimony shows the problem — "it depends on how high the cap is" means there was no real cap, just a vibes-based understanding that dissolved the moment serious money arrived.

The judge shut down the AI safety angle because she knows this case is actually about money and control. But the money and control questions matter for safety. If we can't figure out who should govern the companies building AGI, the technical alignment problem is irrelevant. The misalignment already happened in the cap table.

Sources

Fast Company Tech