AgentScope just shipped realtime voice agents, database-backed memory, and A2A protocol support in a framework that treats LLMs like they can actually think instead of puppeteering them with prompts.
The Summary
- AgentScope 2.0 is a production-grade agent framework designed for "increasingly agentic LLMs" with built-in finetuning, deployment tooling, and ecosystem integrations
- The philosophical shift: let models reason and use tools instead of constraining them with "strict prompts and opinionated orchestrations"
- Just added realtime voice agents, database-backed memory with compression, Agent-to-Agent protocol support, and Kubernetes deployment with OpenTelemetry
The Signal
Most agent frameworks treat LLMs like chatbots with extra steps: chain-of-thought prompts, rigid workflows, handholding for every decision. AgentScope is betting in the other direction, that models are getting good enough to deserve actual autonomy. The framework ships with ReAct agents, tool use, memory systems, and planning capabilities, but the architecture assumes the model does the heavy lifting, not the orchestration layer.
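To make "the model does the heavy lifting" concrete, here is a minimal sketch of a ReAct-style loop. The `call_llm` client and tool registry are hypothetical stand-ins, not AgentScope's actual API; the point is that the loop only routes messages and tool results, while every decision about what to do next comes from the model.

```python
import json
from itertools import count

_step = count()

def call_llm(messages: list[dict]) -> dict:
    """Hypothetical model client. A real one would send `messages` to an LLM;
    this canned version fakes one tool call followed by a final answer."""
    if next(_step) == 0:
        return {"tool": "search_orders", "args": {"customer_id": "c42"}}
    return {"content": "Order 1138 shipped yesterday."}

# Plain tool registry: the orchestration layer only exposes capabilities.
TOOLS = {
    "search_orders": lambda customer_id: f"two open orders for {customer_id}",
}

def react_loop(task: str, max_steps: int = 8) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)  # the model reasons and picks the next action
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        if "tool" in reply:  # model asked for a tool: run it, feed the result back
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:                # model produced a final answer: loop ends
            return reply["content"]
    return "step budget exhausted"

print(react_loop("Where is order 1138?"))
```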
This matters because we're at an inflection point. Six months ago, you needed elaborate prompt engineering to get an agent to check email without hallucinating. Now GPT-4 and Claude can reason through multi-step tasks with minimal guardrails. AgentScope is infrastructure for that world. Production-ready means you can deploy locally, serverless, or on Kubernetes. Built-in OpenTelemetry means you can actually observe what your agents are doing in production, not just in demos.
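The observability claim is worth picturing. This isn't AgentScope's built-in instrumentation, just the vanilla OpenTelemetry pattern (via the `opentelemetry-sdk` package) that any such framework plugs into: wrap each agent step in a span, so a stalled tool call shows up in your trace backend instead of vanishing.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to stdout for the demo; production would point an OTLP
# exporter at a collector instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent.demo")

def run_step(task: str) -> str:
    # Each agent step becomes a span; attributes make it queryable later.
    with tracer.start_as_current_span("agent.step") as span:
        span.set_attribute("agent.task", task)
        return f"handled: {task}"  # stand-in for the real agent call

run_step("check inventory")
```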
"We design for increasingly agentic LLMs. Our approach leverages the models' reasoning and tool use abilities."
The recent feature velocity tells you where the puck is going:
- Realtime voice agents (February 2026): your agent can talk and listen simultaneously, not just transcribe-then-respond
- Database memory with compression (January 2026): agents remember past interactions without context-window bloat (sketched after this list)
- Agent-to-Agent protocol support (December 2025): standardized way for autonomous agents to coordinate without human middleware
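The memory feature is the easiest to sketch. AgentScope's actual schema and summarizer will differ, but the general shape of database-backed memory with compression is: keep recent turns verbatim, collapse everything older into a single summary row so the prompt stays bounded no matter how long the history gets.

```python
import sqlite3

conn = sqlite3.connect("memory.db")
conn.execute("CREATE TABLE IF NOT EXISTS memory (role TEXT, content TEXT)")

def summarize(turns: list[tuple[str, str]]) -> str:
    """Hypothetical compressor; a real one would ask the model for a summary."""
    return f"[summary of {len(turns)} earlier turns]"

def remember(role: str, content: str, keep_recent: int = 20) -> None:
    conn.execute("INSERT INTO memory VALUES (?, ?)", (role, content))
    rows = conn.execute("SELECT role, content FROM memory ORDER BY rowid").fetchall()
    if len(rows) > keep_recent:
        # Collapse everything older than the recent window into one summary
        # row; the recent turns survive verbatim.
        old, recent = rows[:-keep_recent], rows[-keep_recent:]
        conn.execute("DELETE FROM memory")
        conn.execute("INSERT INTO memory VALUES (?, ?)", ("summary", summarize(old)))
        conn.executemany("INSERT INTO memory VALUES (?, ?)", recent)
    conn.commit()
```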
The A2A protocol integration is the quiet big deal. When agents can natively talk to other agents, you stop building monolithic systems and start composing specialist agents: one agent handles a customer inquiry, hands it off to another for order processing, which in turn coordinates with inventory management. No human passing messages. No brittle API contracts that break when requirements shift.
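Here is the handoff shape, stdlib only. The agent card path follows the A2A spec's discovery convention; the JSON-RPC method name paraphrases the spec and should be checked against the current protocol docs, and the URLs are hypothetical.

```python
import json
import urllib.request

def fetch_agent_card(base_url: str) -> dict:
    # A2A agents advertise identity and capabilities via a public agent card.
    with urllib.request.urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.load(resp)

def send_task(base_url: str, text: str) -> dict:
    # JSON-RPC handoff; the method name paraphrases the A2A spec.
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "message/send",
        "params": {"message": {"role": "user", "parts": [{"text": text}]}},
    }
    req = urllib.request.Request(
        base_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# The inquiry agent hands off to a (hypothetical) order-processing agent:
# card = fetch_agent_card("https://orders.internal")
# reply = send_task("https://orders.internal", "process order 1138")
```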
The Implication
If you're building AI products, the question isn't whether to use agents anymore. It's whether your framework assumes models are tools or teammates. AgentScope is infrastructure for the teammate world. Watch how many production deployments use the Kubernetes + OTel stack. That's the tell for whether this is still prototype theater or actually running critical systems.
For developers, the 5-minute quickstart claim is checkable. If it's real, this is how agent development becomes normal software development. Not a research project. Just another service to deploy.
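What "just another service to deploy" looks like, as a hedged sketch: a hypothetical `agent_reply` stand-in (swap in your framework's agent) behind a FastAPI endpoint, shipped like any other container.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str

def agent_reply(text: str) -> str:
    """Hypothetical stand-in; a real deployment would call the agent here."""
    return f"echo: {text}"

@app.post("/chat")
def chat(q: Query) -> dict:
    # The agent is just a request handler: same health checks, same
    # autoscaling, same deploy pipeline as any other service.
    return {"reply": agent_reply(q.text)}

# Run with: uvicorn main:app --port 8000
```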