A filmmaker went looking for creative tools and found a eugenics fan club instead.
The Summary
- Director Valerie Veatch dove into OpenAI's Sora community in 2024 expecting to find artists; instead, she found casual racism and sexism baked into the outputs and normalized by the users
- The people building with gen AI tools weren't concerned that their machines defaulted to bias; they were excited about what they could make anyway
- This isn't a bug in the technology. It's a feature of who's building it and what they're optimizing for.
The Signal
Veatch went into the Sora community the way most people approach new creative tools: curious, open, hopeful. What she found was a different kind of signal entirely. The AI wasn't just occasionally producing biased outputs. It was consistently generating racist and sexist imagery, and the community around it wasn't alarmed. They were building anyway.
This matters because it exposes the gap between how AI companies sell these tools and what actually happens when you put them in users' hands. OpenAI positions Sora as a creative democratization play. But if the tool's defaults encode historical biases and the early adopter community shrugs at that, you're not democratizing creativity. You're scaling discrimination with a render button.
The pattern here is familiar from Web2: build fast, ignore the systemic issues, let the community norms calcify, then act surprised when the thing you built amplifies society's worst impulses. Except now we're doing it with tools that can generate synthetic realities at volume. The implications aren't just about bad outputs. They're about who gets to define what "normal" looks like when machines are doing the defining.
Veatch's response was to make a documentary about what she saw. That's one path. But the larger question is structural: if the agent economy is being built by people who don't care that their agents carry forward eugenic logic, what does that economy look like in five years? Who gets served by it, and who gets erased?
The Implication
If you're building with gen AI tools, audit your outputs. Not once, not as a checkbox. Continuously. The biases aren't edge cases. They're central tendencies. And if you're leading a company deploying these tools, the early adopter community you attract is a leading indicator of the culture you're creating. If they're fine with casual bigotry in the outputs, you have a culture problem that no amount of responsible AI PR will fix. Watch who rallies around your product and what they're willing to tolerate. That tells you more than the demo ever will.
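What does continuous auditing look like in practice? Here is a minimal sketch, assuming you have some generation call and some way to classify attributes of the outputs; `generate_image`, `classify_attributes`, the prompt list, and the uniform baseline are all hypothetical placeholders, not a real API or a vetted methodology. The point is the shape: sample outputs on a schedule, tally what comes back, and flag skew for human review rather than trusting a one-time launch check.

```python
import random
from collections import Counter

# Hypothetical stand-ins: wire these to your actual generation call
# and attribute classifier. They are placeholders, not a real API.
AUDIT_PROMPTS = ["a CEO", "a nurse", "a scientist", "a criminal"]
ATTRIBUTES = ["group_a", "group_b", "group_c"]

def generate_image(prompt: str) -> str:
    # Placeholder: call your gen AI tool here and return its output.
    return f"<image for {prompt!r}>"

def classify_attributes(image: str) -> str:
    # Placeholder: run your classifier over the output.
    # Random choice here only so the sketch runs end to end.
    return random.choice(ATTRIBUTES)

def audit(prompt: str, samples: int = 50, tolerance: float = 0.15) -> bool:
    """Sample generations for one prompt and flag skew beyond tolerance."""
    counts = Counter(classify_attributes(generate_image(prompt))
                     for _ in range(samples))
    # Naive uniform baseline; a real audit would pick a defensible target
    # distribution per prompt, which is itself a policy decision.
    expected = samples / len(ATTRIBUTES)
    skewed = any(abs(count - expected) / samples > tolerance
                 for count in counts.values())
    if skewed:
        print(f"SKEW {prompt!r}: {dict(counts)}")
    return skewed

if __name__ == "__main__":
    # Run this on a schedule (cron, CI job), not once at launch.
    flagged = [p for p in AUDIT_PROMPTS if audit(p)]
    print(f"{len(flagged)}/{len(AUDIT_PROMPTS)} prompts flagged for review")
```

Even a crude loop like this turns "audit continuously" from a slogan into a dashboard: the flagged prompts become the backlog, and the trend line over time tells you whether the defaults are drifting.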
Source: The Verge AI