The $10 billion AI training company just exposed the people who taught its models how to think.
The Summary
- Mercor, a $10B AI training startup, faces five lawsuits in one week after contractors' Social Security numbers, addresses, and interview recordings were compromised through a breach of the open-source LiteLLM software it relied on
- Hackers reportedly accessed Slack data and videos of contractor conversations with AI systems, exposing the invisible labor force behind AI models
- Historical data breach settlements pay $1-5 per class member, a rounding error for Mercor but a preview of compliance costs in the agent economy
The Signal
Mercor built a $10 billion valuation on the backs of gig workers who train AI models, then exposed their most sensitive data through compromised open-source infrastructure. The breach hit LiteLLM, an open-source project created by BerriAI that Mercor relied on. According to leaked materials, the hackers obtained recordings of contractors talking to AI systems, Slack conversations, and the W-9 forms contractors submitted every time they took a gig.
This matters because it exposes three fault lines in the agent economy. First, the companies building AI at scale depend on armies of contractors to label data and train models, but treat them as expendable infrastructure with contractor-grade security. Second, these platforms are stitching together open-source tools like LiteLLM without the security rigor their valuations suggest. Third, the legal precedent for data breaches (typically $1-5 per affected person in settlements) hasn't caught up to the unique risks of AI training data. Interview recordings with AI systems aren't just personal information; they're training data that shapes future models.
One plaintiff, NaTivia Esson, worked for Mercor from March 2025 to March 2026, submitting fresh W-9 forms with each new assignment. She trusted the company with her Social Security number every time. Now she's anticipating years of identity monitoring costs while Mercor, valued at 20,000x what she'll likely recover in a settlement, moves on.
The Implication
If you're doing contract work for AI companies, assume your data is in an S3 bucket somewhere with "temporary" access controls. If you're building agent infrastructure, understand that your security posture is now your liability posture, and open-source dependencies are the new supply chain risk. The companies rushing to automate human work are shockingly casual about protecting the humans still doing it.
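One basic piece of supply-chain hygiene the dependency risk points to: verify that a downloaded artifact matches its published checksum before using it. A minimal sketch in Python, using only the standard library (the function name and workflow here are illustrative, not anything Mercor or LiteLLM actually runs):

```python
import hashlib


def verify_sha256(path: str, expected_hex: str) -> bool:
    """Return True if the file at `path` has the expected SHA-256 digest.

    Reading in chunks keeps memory flat even for large artifacts.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex
```

Tools like `pip install --require-hashes` apply the same idea automatically, refusing to install a package whose contents don't match the pinned digest.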
Source: Business Insider Tech