The labor model powering AI training isn't just exploitative—it's designed to be invisible until it isn't.

The Summary

  • Meta terminated its contract with Sama, the outsourcing firm whose Kenyan operation employed 1,100+ workers reviewing footage from Meta AI Glasses, immediately after those workers spoke to journalists about what they witnessed
  • The firing wasn't about legality—it was about publicity. The work continues elsewhere, just further from view
  • The real crime: building consumer AI products whose core function requires a labor model so repulsive it only works if no one knows about it

The Signal

Meta didn't fire Sama's Kenyan operation because the work violated policy. They fired them because the workers talked. The moment investigative reporters published what those 1,100 contractors were actually seeing—raw footage from users' AI glasses, reviewed without the users knowing it was happening at the time—the only move was to cut the cord. Fast.

The work itself continues. Different contractors. Different country, probably. Same model: capture user data, route it to the cheapest labor market you can find, have humans teach the AI what it's looking at. The difference is that the new contractors will sign tighter NDAs. The facilities will be harder for journalists to access. The paper trail will be cleaner.

"Crimes against public perception require as much cover-up as crimes against the law."

This is the infrastructure layer of Web4 that nobody wants to examine. Every foundation model, every multimodal AI, every system that can "understand" images or video—they all rest on millions of human decisions. Someone has to label the training data. Someone has to review edge cases. Someone has to teach the model what's a cat and what's a threat.
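
To make the mechanics concrete, here is a minimal, hypothetical sketch of that dependency: one human answer becoming one training example. Every class name, field, and value below is an illustrative assumption, not any vendor's actual labeling schema or pipeline.

```python
# Hypothetical sketch only: how a single human decision becomes a single
# training example. Names, fields, and values are illustrative assumptions,
# not any vendor's real labeling system.
from dataclasses import dataclass


@dataclass
class LabelingTask:
    frame_id: str        # identifier for a captured frame or clip
    source: str          # which product surface the footage came from
    question: str        # what the human reviewer is asked to decide
    options: list[str]   # allowed answers


@dataclass
class HumanLabel:
    frame_id: str
    annotator_id: str    # an individual worker, somewhere
    answer: str


def to_training_example(task: LabelingTask, label: HumanLabel) -> tuple[str, str]:
    """Pair the frame with the human's answer; models train on millions of these pairs."""
    if task.frame_id != label.frame_id:
        raise ValueError("label does not match task")
    return task.frame_id, label.answer


if __name__ == "__main__":
    task = LabelingTask(
        frame_id="frame_0001",
        source="smart_glasses",
        question="What is in view?",
        options=["cat", "person", "threat", "none of these"],
    )
    label = HumanLabel(frame_id="frame_0001", annotator_id="annotator_417", answer="cat")
    print(to_training_example(task, label))  # ('frame_0001', 'cat')
```

The point of the sketch is scale: multiply that one record by every frame, every edge case, and every reviewer, and you get the invisible layer the rest of this piece is about.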

The entire consumer AI boom is built on a workforce model that only functions in darkness. Not because it's illegal, but because knowing how it works changes whether people want to use the product. When you buy AI glasses, you think you're buying computer vision. You don't think you're buying a direct line between your eyeballs and a data labeling facility in Nairobi.

Key economics of the hidden layer:

  • AI companies need massive human input during training and refinement
  • That labor needs to be cheap enough to scale and temporary enough to dispose of
  • The workers need to be far enough away—geographically and contractually—that their working conditions never become the company's PR problem
  • Until they do

The Onavo comparison is exact. Meta ran a VPN that spied on user behavior to gather competitive intelligence. Probably legal. Definitely disgusting once people understood it. The solution wasn't to stop doing versions of that work—it was to stop getting caught doing it in ways the public could understand.

This isn't a story about one bad contractor relationship. It's about the core tension in the agent economy: the systems that are supposed to replace human labor currently depend on vast pools of invisible human labor to function. And that labor model is sustainable only as long as it stays invisible.

The Implication

If you're building in AI, ask where your training data comes from and who's labeling it. Not because you'll necessarily stop using those services, but because the supply chain is a liability. The companies whose labor practices get exposed don't stop operating—they just get better at hiding.

If you're investing in AI infrastructure, price in the governance risk. Every model trained on human-labeled data has a paper trail that leads to real people in real places doing real work. When those people talk, valuations move.

The future of work isn't humans versus agents. It's humans doing the work that teaches agents how to work, getting paid less and less as the agents get better, and being structurally prevented from talking about what they see. That model works until someone talks. Then you find new humans and make the NDAs tighter.

Sources

Daring Fireball