TikTok says AI ads must be labeled. Samsung and others are ignoring it. Nobody's enforcing anything.

The Signal

TikTok introduced mandatory AI disclosure labels for advertising in 2024. The policy looked clean on paper: if you use generative AI to create an ad, you mark it. Simple transparency for a world drowning in synthetic content. Two years later, major brands are running AI-generated content unmarked, and TikTok isn't doing anything about it.

Samsung, a company that publicly champions responsible AI development, is the current example. Their TikTok ads show telltale signs of AI generation (the kind of visual artifacts people who look at this stuff all day can spot), but no disclosure label. The fine print doesn't help either. Someone inside Samsung's marketing chain knows definitively whether AI tools generated those videos. They're choosing not to share that information with the hundreds of thousands of people seeing the content.

This matters because we're watching the collapse of AI labeling before it ever really started. TikTok has over a billion users. It's where Gen Z gets its news, discovers products, and, increasingly, forms its view of reality. If the platform can't or won't enforce its own transparency rules on paying advertisers (the people it actually has leverage over), what hope is there for labeling organic content? For tracking AI slop in feeds? For helping anyone distinguish what's real?

The deeper problem: voluntary disclosure systems don't work when there's no penalty for ignoring them. Samsung isn't breaking any laws. TikTok isn't losing ad revenue by looking the other way. Users can't tell the difference anyway. Everyone's incentive is to stay quiet. The only people who lose are the ones trying to maintain some grip on what's authentic and what's manufactured.

The Implication

If you're building products that depend on content authenticity, assume platform policies won't protect you. Build your own detection. If you're making purchasing decisions based on social media advertising, assume everything is AI until proven otherwise. The trust layer is broken, and nobody with power to fix it seems interested in doing so.
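"Build your own detection" is a big project, but a first pass can be cheap. One hedged starting point: C2PA Content Credentials, the provenance standard backed by Adobe, Microsoft, and others, are embedded in media files as JUMBF boxes, so a crude byte scan for the JUMBF and C2PA labels gives a coarse "has provenance metadata" signal. A minimal sketch (the fourcc labels come from the public specs, but treating their mere presence as a verdict is an assumption; real verification means parsing and cryptographically validating the manifest):

```python
# Crude check for embedded C2PA/JUMBF provenance metadata.
# Presence of these markers only hints that a Content Credentials
# manifest exists; absence proves nothing about AI involvement.

C2PA_MARKERS = (b"jumb", b"c2pa")  # JUMBF superbox fourcc, C2PA manifest label

def has_provenance_markers(data: bytes) -> bool:
    """Return True if any C2PA/JUMBF signature appears in the raw bytes."""
    return any(marker in data for marker in C2PA_MARKERS)

# Example on synthetic byte strings (not real media files):
with_manifest = b"\xff\xd8\xff\xeb...jumb...c2pa..."
plain_jpeg = b"\xff\xd8\xff\xe0JFIF ordinary image bytes"

print(has_provenance_markers(with_manifest))  # True
print(has_provenance_markers(plain_jpeg))     # False
```

This only catches content whose creators opted in to provenance labeling, which is exactly the gap the article describes: the honest actors mark themselves, and everyone else scans clean.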


Source: The Verge AI