OpenAI's new revenue chief just told employees what everyone already knows but won't say out loud: users treat AI models like gas stations—they'll switch for a penny cheaper.
The Summary
- OpenAI CRO Denise Dresser sent a four-page internal memo emphasizing the need to "lock in users" and build competitive moats as model-hopping becomes standard behavior
- The company is pivoting hard toward enterprise clients, acknowledging consumer switching costs are near zero
- This signals OpenAI recognizes its GPT-4 crown is temporary and that revenue predictability matters more than model breakthroughs
The Signal
Denise Dresser took over most of former COO Brad Lightcap's duties and immediately addressed OpenAI's core strategic vulnerability. Her Sunday memo to employees reads like it was written by someone who just inherited a business with a retention problem. The language choices matter. "Lock in users." "Build a moat." These aren't the words of a dominant platform confident in its network effects.
They're the words of a company watching Anthropic, Google, and open-source alternatives chip away at what looked like an insurmountable lead 18 months ago. Claude tops the leaderboard one week. Gemini the next. GPT-5 when it drops. Users don't care about the underlying architecture. They care about which bot gives them the best answer today.
"Users treat AI models like gas stations—they'll switch for a penny cheaper."
The enterprise pivot makes tactical sense but reveals strategic anxiety. Consumer switching costs are effectively zero. You can use ChatGPT Monday, Claude Tuesday, and Perplexity Wednesday without changing your workflow. No migration. No integration hell. Just a different URL. Enterprise is different. Once you've embedded ChatGPT into Slack, trained your team on custom GPTs, and built workflows around the API, switching hurts.
Dresser's memo prioritizes three things:
- Enterprise lock-in through integration and custom tooling
- Product stickiness beyond raw model performance
- Revenue predictability from contracts, not volatile consumer subscriptions
This explains why OpenAI has been pushing ChatGPT Enterprise and professional tiers harder than consumer features. The real competition isn't Anthropic's latest benchmark. It's whether OpenAI can build distribution advantages that outlive any single model generation.
The timing matters too. Lightcap's shift to "special projects" while Dresser takes over revenue operations suggests OpenAI is reorganizing around commercialization, not just research breakthroughs. That's the move you make when being first no longer guarantees being biggest.
The Implication
Watch how OpenAI prices its enterprise products over the next six months. If Dresser is serious about moats, expect aggressive bundling, long-term contract incentives, and deeper integrations with tools companies already use. The consumer product becomes a top-of-funnel demo. The enterprise contracts become the actual business.
For anyone building on OpenAI's API, this memo is a warning. If your competitive advantage is "we use the best model," you don't have one. Differentiation lives in the workflow, the data, the vertical specialization—not the foundation model you're calling.