Anthropic's source code leak just gave us the clearest look yet at where AI assistants are actually heading, and it's weirder than you think.
The Summary
- Claude Code's latest update accidentally shipped with 512,000+ lines of TypeScript source code, exposing Anthropic's roadmap before they were ready to show it
- Users found references to a Tamagotchi-style "pet" mode and always-on agent capabilities in the leaked codebase
- The leak reveals Anthropic's actual system prompts, memory architecture, and feature priorities, not marketing copy
The Signal
Most source code leaks are boring. This one isn't. What users found in those 512,000 lines tells us where Anthropic thinks the assistant market is going, and it's not toward better chatbots.
The Tamagotchi reference is the tell. Anthropic is building persistent agents that develop over time, that you need to "care for" in some sense. This isn't a tool you open when you need something. It's a digital entity that exists whether you're actively using it or not. The always-on agent capability confirms this. They're building something that runs in the background, watching, learning, acting on your behalf without constant prompting.
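The summoned-vs-always-on distinction can be made concrete with a tiny sketch. Nothing below is from the leaked code; every name here is a hypothetical stand-in, written only to illustrate the difference between a tool that runs when invoked and an agent that polls its environment and acts on its own:

```typescript
// Hypothetical illustration only — these types and names are assumptions,
// not anything found in Anthropic's codebase.

type Task = { description: string };

// On-demand tool: does nothing unless explicitly invoked.
function runOnDemand(task: Task): string {
  return `did: ${task.description}`;
}

// Always-on agent: owns a loop that watches for opportunities to help,
// with no prompt required. A real system would drive this with a timer
// or event loop; a manual tick() keeps the sketch synchronous.
class AlwaysOnAgent {
  private log: string[] = [];
  constructor(private observe: () => Task | undefined) {}

  tick(): void {
    const task = this.observe();
    if (task) this.log.push(runOnDemand(task));
  }

  history(): string[] {
    return [...this.log];
  }
}

// Usage: the agent acts whenever the environment offers a task.
const pending: Task[] = [{ description: "summarize yesterday's diff" }];
const agent = new AlwaysOnAgent(() => pending.shift());
agent.tick(); // finds the pending task and acts
agent.tick(); // nothing to observe, does nothing
// agent.history() → ["did: summarize yesterday's diff"]
```

The point of the sketch is the ownership inversion: with the on-demand function, you hold the loop; with the agent, it does.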
The memory architecture details matter most. How an AI remembers you determines what kind of relationship you can have with it. Generic chatbots have session memory that dies when you close the window. What Anthropic is building, based on the leaked code, maintains persistent context across sessions. It knows what you worked on last week. It knows what you care about. It becomes more useful the longer you use it, not just within a conversation but across months.
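The session-memory-versus-persistent-memory distinction described above can be sketched in a few lines. This is a minimal illustration under assumed names, not a reconstruction of Anthropic's actual memory architecture; a `Map` stands in for whatever durable store a real system would use:

```typescript
// Hypothetical illustration only — these classes are assumptions for
// explanation, not code from the leak.

// Session memory: dies with the object, i.e. when you close the window.
class SessionMemory {
  private turns: string[] = [];
  remember(turn: string): void {
    this.turns.push(turn);
  }
  recall(): string[] {
    return [...this.turns];
  }
}

// Persistent memory: keyed by user and backed by a store that outlives
// any single session (a Map standing in for disk or a database).
class PersistentMemory {
  constructor(private store: Map<string, string[]>) {}
  remember(userId: string, fact: string): void {
    const facts = this.store.get(userId) ?? [];
    facts.push(fact);
    this.store.set(userId, facts);
  }
  recall(userId: string): string[] {
    return this.store.get(userId) ?? [];
  }
}

// Usage: a later session, reading the same store, still knows the user.
const disk = new Map<string, string[]>();
const lastWeek = new PersistentMemory(disk);
lastWeek.remember("user-1", "prefers TypeScript");

const today = new PersistentMemory(disk); // new session, same backing store
today.recall("user-1"); // → ["prefers TypeScript"]
```

A fresh `SessionMemory`, by contrast, always starts empty, which is exactly the "dies when you close the window" behavior the paragraph describes.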
Anthropic is the first major lab publicly moving toward agents as companions rather than tools. Google and OpenAI talk about agents. Anthropic is apparently building one that has needs, that grows, that sticks around. The always-on piece is crucial. An agent that only works when summoned is still just a fancy command line. An agent that's always there, always watching for opportunities to help, is something different. Something closer to what science fiction promised and what the agent economy actually needs to function.
The Implication
If this roadmap is real, start thinking about AI assistants less like software and more like relationships. The winning agents won't be the ones with the best one-off performance. They'll be the ones you can't imagine uninstalling because they know too much about how you work. Anthropic just showed their hand. They're betting persistent, always-on agents beat on-demand tools. Watch how fast the other labs follow.
Source: The Verge AI