The lawyer who extracted a billion dollars from Big Tech over privacy is now hunting AI companies, and his next lawsuit claims ChatGPT created a stalker.
The Summary
- Jay Edelson, a class-action litigator who secured settlements from Facebook and Anthropic, is now targeting OpenAI and Google with multiple AI-related lawsuits
- In the past year, he shared in a $1 billion settlement with Anthropic over copyright infringement
- He plans to file a case against OpenAI next week in which a woman claims ChatGPT "turned her then-boyfriend into a stalker"
- He has filed three other cases against OpenAI and Google regarding AI chatbots that "epitomize this moment of growing unease with the technology"
The Signal
Silicon Valley has faced regulatory scrutiny and congressional hearings, but Jay Edelson represents something more immediate and expensive. The Chicago litigator has a proven track record of making tech companies pay actual money, and his Anthropic settlement alone hit ten figures. Now he's applying the same playbook to AI.
The timing matters. AI companies are in a legal gray zone: they scraped the internet for training data, launched products that sometimes hallucinate dangerous advice, and built systems whose outputs they can't fully predict or control. That combination creates a broad liability surface, and Edelson sees it. His upcoming ChatGPT stalker case, whatever its merits, signals where this is headed: personal harm claims, not just copyright disputes.
The three cases already filed against OpenAI and Google suggest he's testing multiple legal theories. Copyright was just the warmup. The real fight will be over what happens when AI systems cause measurable harm to individuals. Did ChatGPT provide information that enabled stalking? Did it generate defamatory content? Did it give medical advice that hurt someone? These aren't hypotheticals anymore.
The agent economy can't scale if every AI interaction carries unlimited liability exposure. But right now, that's exactly what companies face. No established safe harbor. No clear standard of care. Just a litigator who made Facebook pay and thinks AI companies are next.
The Implication
If you're building AI products, understand that legal risk isn't coming from regulators alone. Class-action lawyers are ahead of Congress: they're filing cases now, testing theories, and finding plaintiffs. The stalker lawsuit may sound wild, but it's a signal that Edelson is probing for what sticks. One successful case creates a template for thousands more. Companies rushing to ship AI features should be pricing in legal defense costs that could dwarf their compute bills. This is the tax on moving fast and breaking things, coming due.
Source: The Information