The modern data stack just aged out—not because it failed, but because its users stopped being human.
The Summary
- Google launched Agentic Data Cloud at Cloud Next, rebuilding enterprise data architecture around AI agents that act autonomously rather than humans who ask questions
- Knowledge Catalog auto-curates metadata from query logs; the cross-cloud lakehouse lets BigQuery query Iceberg tables on AWS S3 with no egress fees; Data Agent Kit turns VS Code into a natural-language pipeline builder
- "We're moving from human scale to agent scale," says Google's Andi Gutmans—the first major cloud vendor to officially declare the dashboarding era over
The Signal
Google just said the quiet part out loud: the entire enterprise data stack was architected for the wrong end user. Every tool you've deployed in the last decade (Snowflake, Databricks, dbt, Looker) was optimized for humans running scheduled queries, staring at dashboards, and making decisions during business hours. That model assumed intelligence lived in human heads and that data platforms existed merely to supply it. Google's Agentic Data Cloud flips that assumption: agents are now the primary users, and they don't ask questions. They take action.
The architecture shift is structural, not cosmetic. Knowledge Catalog infers semantic metadata by watching query patterns instead of waiting for data stewards to tag columns manually. That's the difference between a system designed for human governance cadence and one designed for agent velocity. When an agent needs to understand what "revenue" means across twelve data sources at 3am on a Sunday, it can't wait for the Tuesday governance meeting.
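Google hasn't published the inference mechanics, but the core move is easy to sketch from first principles: mine the query log for recurring aliases and join conditions, and propose semantic tags from the patterns. A toy Python sketch of the idea, with invented table and column names; this is illustrative, not how Knowledge Catalog actually works:

```python
# Toy sketch of query-log metadata inference. All names are illustrative;
# Google has not published Knowledge Catalog's actual algorithm.
import re
from collections import Counter

QUERY_LOG = [
    "SELECT o.total AS revenue FROM orders o JOIN customers c ON o.customer_id = c.id",
    "SELECT b.amount AS revenue FROM billing b JOIN customers c ON b.cust_id = c.id",
    "SELECT o.total AS revenue FROM orders o WHERE o.region = 'EMEA'",
]

def infer_aliases(log: list[str]) -> Counter:
    """Count which physical columns analysts keep renaming to the same business term."""
    aliases = Counter()
    for query in log:
        for col, term in re.findall(r"([\w.]+)\s+AS\s+(\w+)", query, re.IGNORECASE):
            aliases[(term.lower(), col)] += 1
    return aliases

def infer_join_keys(log: list[str]) -> Counter:
    """Column pairs that recur in ON clauses are candidate key relationships."""
    keys = Counter()
    for query in log:
        for lhs, rhs in re.findall(r"ON\s+([\w.]+)\s*=\s*([\w.]+)", query, re.IGNORECASE):
            keys[frozenset((lhs, rhs))] += 1
    return keys

# 'revenue' resolves to orders.total (twice) and billing.amount (once), and
# customer_id/id recurs as a join key. No steward tagged anything.
print(infer_aliases(QUERY_LOG))
print(infer_join_keys(QUERY_LOG))
```

The point of the toy: the evidence for what "revenue" means is already sitting in the log, accumulating at query velocity rather than governance-meeting velocity.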
"The data architecture has to change now. We're moving from human scale to agent scale."
The cross-cloud lakehouse play is equally telling. BigQuery can now query Iceberg tables sitting on AWS S3 over a private network connection with zero egress fees. That's not a feature; it's a bet that agents will operate across infrastructure boundaries that humans carefully avoided for cost reasons. When your agent economy spans clouds, vendor lock-in becomes an agent-scale bottleneck.
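In practice the ergonomics matter more than the plumbing: once the table is registered, the cross-cloud hop disappears behind ordinary SQL. A hedged sketch with the standard google-cloud-bigquery client; the project, connection name, dataset, bucket path, and Iceberg OPTIONS below are placeholders, and the exact DDL depends on your BigLake and Omni configuration:

```python
# Hedged sketch: querying an S3-resident Iceberg table from BigQuery.
# Connection, dataset, and URI values are placeholders, not real config.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project

# One-time setup: register the Iceberg table through a connection that
# points at the AWS region holding the data.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS analytics.orders_iceberg
WITH CONNECTION `aws-us-east-1.s3_lakehouse`
OPTIONS (
  format = 'ICEBERG',
  uris = ['s3://my-bucket/warehouse/orders/metadata/v3.metadata.json']
)
"""
client.query(ddl).result()

# From here it is ordinary SQL. The cross-cloud hop is invisible to the
# caller, which is exactly what an agent spanning clouds needs.
rows = client.query(
    "SELECT region, SUM(total) AS revenue "
    "FROM analytics.orders_iceberg GROUP BY region"
).result()
for row in rows:
    print(row.region, row.revenue)
```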
The Data Agent Kit is where this gets concrete for practitioners:
- Natural language pipeline definitions replace SQL and Python transforms
- MCP (Model Context Protocol) tools drop into VS Code, Claude Code, and Gemini CLI
- Engineers describe outcomes; agents generate the data flows
This is Google admitting that if agents are writing half your code anyway, your data tooling should speak their language natively. The modern data stack was built for SQL-literate humans. The agentic data stack is built for LLMs that learned SQL from the internet and would rather talk in English.
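The protocol half of that list is easy to make concrete, because MCP itself is open. Here's a minimal sketch using the open-source MCP Python SDK; the server name, tool name, and parameters are invented, since Google hasn't published the Data Agent Kit's actual schemas:

```python
# Hedged sketch of what a Data Agent Kit-style MCP tool could look like.
# Tool and parameter names are invented; only the SDK usage is real.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("data-pipeline-tools")  # hypothetical server name

@mcp.tool()
def define_pipeline(outcome: str, sources: list[str], freshness: str) -> str:
    """Turn a natural-language outcome into a pipeline spec.

    An agent in VS Code, Claude Code, or Gemini CLI calls this with a
    description like 'daily revenue by region, reconciled across billing
    and orders' and gets back a plan to review or execute.
    """
    # A real implementation would hand `outcome` to a planner and emit DAG
    # definitions; this stub just echoes a spec to show the tool's shape.
    return (
        f"pipeline:\n  outcome: {outcome}\n"
        f"  sources: {', '.join(sources)}\n  freshness: {freshness}"
    )

if __name__ == "__main__":
    mcp.run()  # stdio transport; the editor's agent connects to this process
```

Once a server like this is registered with the editor, defining a pipeline is just another tool call in the agent's conversation, which is the whole point: the authoring surface becomes the language the agent already speaks.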
Gutmans framed this as moving "from system of intelligence to system of action." That language matters. Intelligence implies analysis, insight, recommendations—outputs for humans to act on. Action implies the platform itself executes. The difference is who holds the steering wheel. In the human-scale world, data platforms inform decisions. In the agent-scale world, they are the decision layer.
The Implication
If you're a data engineer, this is your heads-up that the job description just shifted. The skill isn't writing perfect dbt models anymore—it's teaching agents what outcomes matter and how to verify they got there. If you're buying data infrastructure, ask whether it was designed for scheduled queries or continuous agent operations. The companies still optimizing for human dashboards are selling you last decade's architecture at next decade's prices.
Watch how fast the other clouds follow. Google moved first, but AWS and Azure are rebuilding the same stack right now. The agent economy doesn't wait for vendor roadmaps.