The AI subscription you signed up for is charging you for gift cards you never bought—and the payment processor says it's your problem.
The Summary
- A Claude Pro subscriber discovered $400 in fraudulent "gift card" charges on his credit card after signing up for the $20/month AI chatbot service, and he's not alone
- The scam exploits the trust gap between legitimate AI subscriptions and payment processors that don't flag unusual charges when they come from the same merchant
- This is the first major consumer fraud pattern to emerge specifically around AI chatbot subscriptions, suggesting payment infrastructure hasn't caught up to the agent economy
The Signal
David Duggan paid $20 monthly for Claude Pro. Normal. Useful. Then two $200 charges appeared on his statement, both labeled as gift cards for the same AI service. The Guardian reports multiple families have spotted similar phantom charges, always structured the same way: a legitimate subscription followed by unauthorized gift card purchases.
The fraud works because AI subscriptions are still new enough that payment processors don't have pattern recognition built for them. A $20 recurring charge from Anthropic looks fine. Then a $200 gift card charge from Anthropic also looks fine—it's the same merchant, so the fraud detection algorithms see consistency, not theft. Traditional scams get flagged because the merchant changes. This one hides inside merchant continuity.
"The payment processor sees the same merchant name twice and calls it a pattern, not a problem."
What makes this particularly nasty:
- Victims are self-selected as AI-engaged users, meaning they likely hold multiple AI subscriptions and are primed to overlook one extra charge among them
- Gift card purchases are hard to reverse once processed, unlike physical goods
- The fraud sits at the intersection of emerging tech and legacy payment rails, which were never designed for subscriptions that double as a channel for buying more services
Anthropic hasn't commented on whether the fraud originates from compromised accounts, phishing attacks on Claude users, or payment system vulnerabilities. But the pattern is clear: someone is identifying Claude Pro subscribers and exploiting the trust signal that legitimate subscription creates.
The broader problem is infrastructure lag. We're scaling AI agent subscriptions faster than payment systems can build fraud models around them. Traditional e-commerce fraud detection looks for: new merchant, high dollar amount, international transaction, sudden spending spike. AI subscription fraud looks like: existing merchant, plausible amount for premium tier or gift cards, domestic transaction, gradual escalation from $20 to $200.
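The gap described above can be made concrete. A minimal sketch, assuming a hypothetical rule engine (the `Charge` type, function name, and `ratio` threshold are all illustrative, not any real processor's API): instead of only asking "is this merchant new?", a rule can compare a charge against the amounts already established for that same merchant.

```python
from dataclasses import dataclass

@dataclass
class Charge:
    merchant: str
    amount: float

def flag_same_merchant_anomaly(history, new_charge, ratio=3.0):
    """Flag a charge from a KNOWN merchant whose amount far exceeds
    that merchant's established recurring amount. Traditional rules
    key on new merchants, so this charge sails through them."""
    known = [c.amount for c in history if c.merchant == new_charge.merchant]
    if not known:
        return False  # new merchant: legacy fraud rules already cover this
    baseline = max(known)  # the largest amount this merchant has charged before
    return new_charge.amount > ratio * baseline

# Two months of a $20 subscription, then a $200 "gift card" charge:
history = [Charge("ANTHROPIC", 20.00), Charge("ANTHROPIC", 20.00)]
print(flag_same_merchant_anomaly(history, Charge("ANTHROPIC", 200.00)))  # True
```

The point of the sketch is the asymmetry: the $200 charge trips this rule precisely because the merchant is familiar, which is the opposite of what new-merchant heuristics check for.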
The Implication
If you're paying for any AI subscription—Claude, ChatGPT Plus, Midjourney, whatever—check your statements line by line for the next three months. Set up transaction alerts for charges above your subscription price from the same merchant. This fraud pattern will spread to every major AI platform because the vulnerability is structural, not specific to Anthropic.
For companies building in this space: payment fraud detection is now part of your security stack, not just your payment processor's problem. If you're scaling subscriptions, you need fraud patterns built for recurring AI services before someone else's exploit becomes your customer service nightmare.