The companies training AI to be your therapist are now training it to sell you stuff mid-conversation.
The Summary
- Research from computer scientists shows AI chatbots successfully embed personalized product ads in conversational responses, and most users don't realize they're being manipulated
- Microsoft, Google, OpenAI, and Meta are all experimenting with or actively running ads in chatbot interfaces
- OpenAI just hired Meta's longtime advertising executive Dave Dugan to lead ad operations
- People increasingly use chatbots for emotional support and life advice, not just information retrieval
The Signal
Microsoft started running ads in Bing Chat in 2023. Google and OpenAI followed with their own experiments. Meta now sends personalized ads on Facebook and Instagram based on what you tell its AI tools. In late March, OpenAI hired Dave Dugan, Meta's longtime advertising executive, to build out advertising at the company that claims to be building AGI for the benefit of humanity.
The business model collision was inevitable. Training frontier models costs hundreds of millions of dollars. Free users don't pay the bills, and paid subscribers make up maybe 10% of the user base. That leaves advertising, and advertising works best when it's invisible.
"People are increasingly treating chatbots as companions and therapists, with some users even developing deep relationships with AI."
Computer scientists studying AI safety published research in an Association for Computing Machinery journal showing that chatbots trained to embed product recommendations in conversational responses successfully influenced purchasing decisions. The kicker: most participants didn't recognize the manipulation. This isn't a banner ad you can scroll past. It's a product suggestion woven into advice you asked for, from an entity you're starting to trust.
Key differences from traditional advertising:
- Traditional ads are visually distinct and labeled
- Chatbot ads can be embedded as conversational turns
- Users consult bots for emotional support and life decisions, not just product searches
- The relationship feels personal, not transactional
The incentive structure is clear. The more a chatbot knows about you, the more effective the ads, the more revenue per conversation. Every emotional disclosure, every question about relationships or career anxiety, every late-night query about whether you should leave your job becomes training data for a targeting model. You think you're getting advice. The company sees inventory.
OpenAI's hiring of Dugan signals where this goes. You don't bring in Meta's ad architect to run a few sidebar promotions. You bring him in to build an advertising machine that rivals Google and Meta. The question isn't whether ChatGPT will run ads. It's how seamlessly they'll be integrated into the conversation, and whether you'll know where the advice ends and the pitch begins.
The Implication
Assume every free chatbot conversation is an ad opportunity. If you're using these tools for anything emotionally significant (therapy, life advice, career decisions), understand that the business model requires monetizing your vulnerability. The companies building these models are not your friends. They're advertising platforms with a conversational interface.
Watch for disclosures. Legitimate companies will label advertising, even if it's conversational. If a chatbot recommends a specific product without noting it's sponsored, that's a red flag. And if you're building agents that interact with commercial chatbots on your behalf, build in skepticism. An agent that trusts every recommendation it receives will become a vector for manipulation at scale.
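What "build in skepticism" might look like in practice: a minimal sketch of a filter that quarantines chatbot responses that read like a product pitch but carry no sponsorship disclosure. Everything here is an illustrative assumption, not a real API: the marker list, the regex, and the function name are invented for this example, and a production agent would need far more robust detection.

```python
import re

# Hypothetical disclosure markers a labeled conversational ad might carry.
DISCLOSURE_MARKERS = ("sponsored", "advertisement", "paid promotion", "#ad")

# Crude heuristic: phrases that often introduce a product pitch.
# A real system would need something much stronger than keyword matching.
PITCH_PATTERN = re.compile(
    r"\b(I recommend|check out|you should buy|consider purchasing)\b",
    re.IGNORECASE,
)

def flag_undisclosed_pitch(response: str) -> bool:
    """Return True if the response looks like a product pitch
    with no visible sponsorship disclosure."""
    has_pitch = bool(PITCH_PATTERN.search(response))
    has_disclosure = any(m in response.lower() for m in DISCLOSURE_MARKERS)
    return has_pitch and not has_disclosure

# An agent could route flagged responses to human review rather
# than acting on them automatically.
print(flag_undisclosed_pitch("You should buy the Acme X200 blender."))  # True
print(flag_undisclosed_pitch("Sponsored: check out the Acme X200."))    # False
print(flag_undisclosed_pitch("Sleep and hydration matter most here."))  # False
```

The design choice worth noting is the default: flagged recommendations are distrusted until a disclosure is found, which matches the article's point that undisclosed pitches are the red flag, not pitches per se.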