The lawyer who helped extract a billion dollars from Anthropic is now systematically hunting AI companies, and his next case claims ChatGPT turned someone into a stalker.
The Summary
- Jay Edelson, the class-action litigator behind major Big Tech settlements, is targeting AI companies with a wave of lawsuits and has already claimed a share of Anthropic's billion-dollar copyright settlement
- His firm has filed three cases against OpenAI and Google in the past year, with another OpenAI suit dropping as soon as next week
- The upcoming case alleges ChatGPT "turned a boyfriend into a stalker," marking a shift from copyright claims to direct harm allegations
The Signal
The litigation landscape around AI just entered a new phase. Edelson isn't chasing theoretical harms or abstract copyright questions anymore. He's building a portfolio of cases that connect AI outputs to real human damage. The stalker case, if it proceeds, represents something the industry hasn't faced yet: liability for how people use AI tools, not just what data trained them.
This matters because Edelson has a track record. His firm was part of the team that got Anthropic to settle for a billion dollars over copyright infringement. That's not a nuisance settlement. That's validation that courts see AI training as potentially actionable. Now he's expanding the attack surface. Copyright was just the beachhead.
The agent economy runs on trust that AI outputs are tools, not actors. But if courts start seeing chatbots as contributors to harm rather than neutral instruments, the liability math changes fast. OpenAI, Google, and Anthropic are already fighting model collapse, hallucination problems, and compute costs. Add in legal exposure for downstream use cases and you're looking at a fundamentally different risk profile for every company building agentic systems.
Edelson's Chicago operation, complete with a sheepadoodle and pickleball breaks, is deliberately anti-Valley. He's positioning himself as the populist check on AI ambition, and he's picking cases that play well: creepy boyfriend behavior, copyright theft, chatbot lies. These aren't abstract. They're tabloid-ready, which means jury-friendly.
The Implication
If you're building AI products, the question isn't whether you'll get sued. It's whether your risk model accounts for liability beyond your direct control. User-harm cases, if they stick, mean your legal exposure scales with adoption, not just with what you built. Watch how OpenAI responds to the stalker case. If they settle, it's a signal they see the writing on the wall. If they fight, we're in for a years-long battle over where AI liability begins and ends.
Source: The Information