TikTok can't label AI-generated ads because the people running the ads won't tell them.
The Summary
- Samsung and other advertisers are running AI-generated ads on TikTok without proper disclosure, despite platform requirements
- The gap isn't a detection failure; it's an honor system that assumes companies will self-report
- Platforms built trillion-dollar ad businesses on surveillance but can't identify synthetic content in their own inventory
The Signal
TikTok requires advertisers to disclose AI-generated content. Samsung ignored that requirement. The broader issue is that no platform, not TikTok, not Meta, not YouTube, has built reliable automated detection for AI-generated ads. They're all running on self-disclosure. Companies check a box saying "this is AI" or they don't. That's the entire enforcement mechanism.
This matters because the disclosure requirement exists precisely because platforms admitted they couldn't reliably detect synthetic content at scale. So they offloaded the problem to advertisers. Except advertisers have zero incentive to label content as artificial when the whole point is making it feel authentic. Samsung's ads look real because they're designed to. The watermarks and metadata signals everyone promised would solve this are trivial to strip or were never implemented consistently in the first place.
The technical challenge is real. AI detection tools struggle with false positives and degrade as models improve. But the larger failure is structural. Platforms know who made an ad, how much they paid, and often which tools were used to produce it. They have the business relationship. They process the payment. But they've chosen not to verify the one claim that matters for transparency: whether the content is synthetic.
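The false-positive problem compounds at scale. A back-of-envelope Bayes calculation shows why (the rates below are illustrative assumptions, not measured numbers for any real detector): even a classifier with 95% recall and a 1% false-positive rate produces mostly wrong flags when synthetic ads are rare in the inventory.

```python
# Illustrative Bayes arithmetic for detector precision at low prevalence.
# All rates are assumptions for the sketch, not measured detector stats.

def flag_precision(prevalence: float, recall: float, fpr: float) -> float:
    """P(ad is synthetic | detector flags it)."""
    true_flags = recall * prevalence          # synthetic ads correctly flagged
    false_flags = fpr * (1.0 - prevalence)    # human-made ads wrongly flagged
    return true_flags / (true_flags + false_flags)

# If 1% of ads are synthetic, with 95% recall and a 1% false-positive rate:
p = flag_precision(prevalence=0.01, recall=0.95, fpr=0.01)
print(round(p, 2))  # 0.49 -- roughly half of flagged ads are human-made
```

That arithmetic is why platforms can't simply auto-label everything a detector flags: at realistic error rates the labels would be wrong about as often as they're right, which pushes them back toward self-disclosure.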
The Implication
If you're building in the agent economy, understand that trust infrastructure is still manual and broken. Automated systems won't save us from intentional nondisclosure. The companies best positioned to verify AI usage are the ones selling the ad space, but only if they choose to care.
Watch for regulatory pressure to shift from "label your AI" to "platforms must verify." That's where this breaks. When liability lands on the distribution layer, not the creator, disclosure becomes enforceable.
Source: The Verge AI