The agent economy just hired its first probation officers.

The Summary

  • Meta and Amazon are dealing with AI agents causing security breaches and outages, pushing ServiceNow and startups to build "guardian AI agents" that monitor and control other agents
  • These guardians are cloud apps that connect via APIs or Model Context Protocol (MCP) servers to watch agents built on OpenClaw, Claude Code, and Agentforce
  • Setup is described as laborious, meaning enterprises are accepting overhead costs to solve a problem that's already biting them

The Signal

Here's what nobody wants to say out loud: we're deploying AI agents at scale before we know how to control them. When Meta and Amazon, two companies with absurd engineering resources, are having production incidents from rogue agents, that's not an edge case. That's the new normal.

The guardian agent model is interesting because it reveals the architecture of trust in Web4. You don't trust the worker agent, so you hire a watcher agent. But notice the setup: these guardians plug into agents via APIs and MCP servers, which means they're reading behavior, not intention. They're reactive, not preventive. A guardian can see an agent firing off 10,000 database queries and kill it. What it can't do is understand why the agent thought that was a good idea in the first place.
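That reactive loop can be sketched in a few lines. This is a toy illustration of the pattern, not any vendor's actual API: `WorkerAgent`, `Guardian`, and the query-count metric are all hypothetical stand-ins, assuming the guardian only sees behavioral telemetry and its only lever is a kill switch.

```python
from dataclasses import dataclass, field


@dataclass
class WorkerAgent:
    """Illustrative stand-in for a monitored agent (not a real framework class)."""
    name: str
    query_count: int = 0
    alive: bool = True

    def run_step(self, queries: int) -> None:
        # The agent "decides" to run queries; the guardian never sees why.
        if self.alive:
            self.query_count += queries


@dataclass
class Guardian:
    """Reactive watcher: it reads behavior (a query counter), not intent."""
    query_limit: int
    incidents: list = field(default_factory=list)

    def observe(self, agent: WorkerAgent) -> None:
        if agent.alive and agent.query_count > self.query_limit:
            # The only control available is stopping the agent after the fact.
            agent.alive = False
            self.incidents.append(f"{agent.name} exceeded {self.query_limit} queries")


# Usage: the guardian only catches the overrun after it has already happened.
worker = WorkerAgent("etl-bot")
guardian = Guardian(query_limit=1000)
for _ in range(20):
    worker.run_step(100)      # worker fires off another batch of queries
    guardian.observe(worker)  # guardian reacts once the limit is crossed
print(worker.alive, worker.query_count)  # → False 1100
```

Note what the sketch makes concrete: by the time the guardian intervenes, 1,100 queries have already run. Monitoring behavior gives you a tripwire, not foresight.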

ServiceNow entering this space is the tell. They don't build products for problems that might happen. They build products for problems IT departments are already screaming about. If guardian agents are becoming a product category, agent chaos is already a budget line item.

The laborious setup is the other signal. Enterprises are willing to do hard integration work today to get visibility into what their agents are doing. That means the pain is acute. It also means we're nowhere near plug-and-play agent infrastructure. Every company deploying autonomous agents is also deploying the scaffolding to watch them, which roughly doubles the operational overhead.

The Implication

If you're building or buying AI agents, start asking about observability before you ask about capability. The question isn't "what can this agent do?" It's "how will I know what it's doing, and how do I stop it?" Companies that skip this step are the ones who'll show up in case studies about expensive outages. The guardian layer is becoming table stakes, which means the true cost of agent deployment is higher than the sticker price suggests.
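The observability-first posture can also live inside the agent itself rather than in a bolt-on guardian. A minimal sketch, with hypothetical names throughout (`AuditedAgent`, the `allow` policy hook): every action passes through a gate that both logs it and can refuse it, answering "how will I know" and "how do I stop it" in one place.

```python
from datetime import datetime, timezone
from typing import Callable, Optional


class AuditedAgent:
    """Hypothetical wrapper: every action is logged and policy-gated before it runs."""

    def __init__(self, allow: Callable[[str], bool]):
        self.allow = allow           # policy gate -> "how do I stop it"
        self.log: list[dict] = []    # audit trail -> "how will I know what it's doing"

    def act(self, action: str, fn: Callable[[], object]) -> Optional[object]:
        permitted = self.allow(action)
        # Log before executing, so even blocked attempts leave a record.
        self.log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "permitted": permitted,
        })
        return fn() if permitted else None


# Usage: a toy policy that refuses anything destructive.
agent = AuditedAgent(allow=lambda a: not a.startswith("drop_"))
ok = agent.act("read_table", lambda: "rows")        # allowed and logged
blocked = agent.act("drop_table", lambda: "oops")   # refused before it runs, still logged
```

The design point is that the gate sits in front of execution, not behind it: unlike the external guardian, a refusal here is preventive, and the audit log captures attempts as well as successes.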


Source: The Information