From "Hello World" to "Hello Agents": The Developer Keynote That Rewired Software Engineering

Dev.to / 4/24/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage · Industry & Market Moves · Models & Research

Key Points

  • Google Cloud NEXT ’26 Day 2’s Developer Keynote used a 60-minute live-coding exercise (“plan a Las Vegas marathon using AI agents”) to demonstrate how agentic systems are built and operated in practice.
  • The keynote mapped seven demos to an agent development lifecycle—build, orchestrate, remember, debug, deploy, extend, and secure—emphasizing operational realities over hype.
  • It showcased production-style agent architecture as a team of specialized agents (planner, evaluator, simulator, and later a supply-chain/logistics agent) working across multiple environments using standard protocols.
  • The session highlighted the integration of code-first and no-code development, including a logistics agent built entirely with no-code tools and coordinated by the code-first Planner agent.
  • It provided concrete signals for developers who must design, deploy, debug, and secure autonomous/agentic software systems rather than treating AI as a single monolithic “super-agent.”

This is a submission for the Google Cloud NEXT Writing Challenge

Day 1 of Google Cloud Next '26 gave us the vision. Sundar Pichai announced 75% AI-generated code. Thomas Kurian declared the pilot era dead. Amin Vahdat unveiled 8th-generation TPUs. This is Part 2 of my Google Cloud Next '26 coverage. Read Part 1: The 75% Illusion.

Day 2 gave us the instruction manual.

Honestly, I initially thought this was going to be a product sales demo. It turned out to be the most honest technical blueprint I've ever seen from Google Cloud. Watching this keynote felt like being let into a master architect's workshop. Not a presentation. A live operation.

The Developer Keynote, led by Chief Evangelist Richard Seroter and Developer Relations Engineer Emma Twersky, didn't rely on sweeping statements. It was a single, continuous, 60-minute live-coding gauntlet: plan a Las Vegas marathon using nothing but AI agents. What unfolded was not a product demo but a masterclass in the operational reality of agentic software engineering.

The keynote walked through seven distinct demos, each mapped to a phase of the agent development lifecycle: build, orchestrate, remember, debug, deploy, extend, and secure. This article dissects what each demo revealed—and what it means for developers who now have to build, deploy, and live with autonomous systems.

The Demo: Not a Marathon. An Operating System.

The setup was deliberately mundane: plan a marathon. But the architecture was anything but:

  • Planner agent: determines optimal running routes using Google Maps, geographic information systems, and race director guidelines.
  • Evaluator agent: validates those routes against business requirements and municipal regulations.
  • Simulator agent: models crowd behavior along the route using randomized pedestrian agents.
  • Supply chain agent (added later): handles logistics—water stations, portable toilets, medical tents—built entirely via no-code tools and called by the Planner agent through standard protocols.

Four agents (one added mid-demo). Two deployment environments (Agent Runtime and GKE). Two development paradigms (code-first and no-code). One protocol connecting them all.

This is what production agentic architecture looks like. Not a single super-agent, but a flock of specialized agents that discover each other, delegate work, and reason across boundaries.

Demo 1: Build Agents with Agent Platform

The Planner agent was built using Agent Designer, a visual interface within the Gemini Enterprise Agent Platform. Emma described the agent's behavior in natural language, clicked "Get Code," and the system generated Python code using the Agent Development Kit (ADK).

The agent's anatomy is instructive. Every agent comprises three primitives:

  1. Instructions: natural language defining the agent's role and behavior
  2. Skills: executable extensions that connect the agent to external APIs, databases, and scripts
  3. Tools: specific API definitions the agent can invoke, such as Google Maps via managed MCP servers
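The real ADK API differs in detail, but the three primitives can be modeled in a few lines of plain Python. This is an illustrative sketch only: the `Agent` class, the `find_route` stub, and its return values are mine, not from the keynote (a real agent would reach Google Maps through a managed MCP server rather than a local function):

```python
from dataclasses import dataclass, field
from typing import Callable

# Tool: a specific callable the agent can invoke. Hypothetical stub standing
# in for a Maps route lookup.
def find_route(start: str, end: str) -> dict:
    return {"start": start, "end": end, "distance_km": 42.2}

@dataclass
class Agent:
    name: str
    instructions: str                                         # natural-language role definition
    skills: list[str] = field(default_factory=list)           # named executable extensions
    tools: dict[str, Callable] = field(default_factory=dict)  # invokable API definitions

    def invoke_tool(self, tool_name: str, **kwargs):
        # The agent declares what it needs; dispatch is handled for it.
        return self.tools[tool_name](**kwargs)

planner = Agent(
    name="planner",
    instructions="Determine optimal marathon routes for Las Vegas.",
    skills=["route-planning"],
    tools={"find_route": find_route},
)

result = planner.invoke_tool("find_route", start="The Strip", end="Fremont St")
print(result["distance_km"])  # 42.2
```

The point of the shape, not the code: behavior lives in natural-language instructions, while capability lives in declared tools the platform resolves on the agent's behalf.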

Google's fully managed remote MCP servers for Google Maps, BigQuery, Compute Engine, and Kubernetes Engine were demonstrated. Apigee now functions as an MCP bridge, translating any standard API into a discoverable agent tool with existing IAM and governance controls inherited automatically.

This matters because tool access has been the silent killer of agent projects. Managing API keys, rate limits, and authentication across dozens of tools creates brittle systems. Managed MCP servers externalize that complexity. The agent simply declares what it needs, and the platform handles the rest.

Key takeaway: The agent platform provides three primitives—instructions, skills, and tools—while managed MCP servers eliminate the API integration burden entirely.

Demo 2: Creating Multi-Agent Systems

The Evaluator agent was deployed as a sub-agent of the Planner. The Simulator agent ran in a separate Agent Runtime instance, communicating with the Planner via the Agent2Agent (A2A) protocol.

A2A uses Agent Cards—cryptographically signed metadata documents declaring what each agent can do, what inputs it accepts, and how to reach it. Agents discover each other through Agent Registry, which functions like DNS for agents: query the registry, find agents with the capabilities you need, then communicate over A2A without complex API contracts.
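The published A2A specification defines the exact Agent Card schema; as a rough illustration of the discovery flow only, with simplified field names and invented agent names and URLs:

```python
# Minimal sketch of registry-based discovery: Agent Cards declare what an
# agent can do, and callers query the registry by capability instead of
# hard-coding API contracts. Field names are simplified, not the A2A schema.

registry: list[dict] = []

def register(card: dict) -> None:
    registry.append(card)

def discover(capability: str) -> list[dict]:
    # Like a DNS lookup, but keyed on capability rather than hostname.
    return [c for c in registry if capability in c["capabilities"]]

register({
    "name": "supply-chain-agent",
    "capabilities": ["logistics", "supply-ordering"],
    "endpoint": "https://agents.example.com/supply-chain",  # illustrative URL
})
register({
    "name": "simulator-agent",
    "capabilities": ["crowd-simulation"],
    "endpoint": "https://agents.example.com/simulator",
})

# A caller reads only the card; it never learns (or cares) how the
# agent behind the endpoint was built.
matches = discover("logistics")
print(matches[0]["name"])  # supply-chain-agent
```

In the real protocol the card is signed metadata fetched over the network, but the contract is the same: capability in, endpoint out.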

The significance is architectural. A systematic analysis of 18 agent communication protocols found that the Model Context Protocol (MCP) handles tool access—connecting agents to databases, APIs, and services. A2A handles peer coordination—agent-to-agent negotiation, delegation, and result exchange. Together, they establish a two-layer communication substrate: MCP for the tool layer, A2A for the agent layer. (Yuan et al., 2026)

This two-protocol stack is quietly becoming the TCP/IP of the agentic internet. A2A has reached 150 organizations in production—not pilots—routing real workloads between agents built on Salesforce, ServiceNow, SAP, and Microsoft stacks.

The keynote also demonstrated Agent-to-UI (A2UI), a declarative standard where agents generate user interfaces as structured data rather than rendering code. This eliminates the frontend bottleneck: when the Planner agent was called from the Gemini Enterprise app, the interface was generated dynamically by A2UI, not hand-coded.

Key takeaway: A2A and Agent Registry let agents discover and negotiate with each other—the DNS and HTTP of the agent world.

Demo 3: Enhancing Agents with Memory

Memory is where most agent demos stop working. They handle a single session, then forget everything.

The Developer Keynote addressed this head-on by distinguishing between:

  • Sessions: short-term state maintained during a single interaction (managed natively by Agent Runtime)
  • Memory Bank: long-term, persistent context that survives across sessions and days

The Planner agent used Memory Bank to recall previously planned routes and learned preferences. When a new marathon request arrived, the agent didn't start from zero—it recalled past decisions and adapted them.
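The session/Memory Bank split can be sketched in plain Python. This models the behavior described on stage, not the Agent Runtime API; class and key names are mine:

```python
# Short-term session state is discarded per interaction; the memory bank
# persists across sessions, so a later run does not start from zero.

class MemoryBank:
    """Long-term store: outlives any single interaction."""
    def __init__(self):
        self._facts: dict[str, str] = {}

    def remember(self, key: str, value: str) -> None:
        self._facts[key] = value

    def recall(self, key: str, default=None):
        return self._facts.get(key, default)

class Session:
    """Short-term state: exists only for one interaction."""
    def __init__(self, memory: MemoryBank):
        self.memory = memory
        self.turns: list[str] = []

bank = MemoryBank()

# Session 1: plan a route and persist the learned preference.
s1 = Session(bank)
s1.turns.append("plan marathon route")
bank.remember("preferred_start", "The Strip")
del s1  # session state is gone when the interaction ends...

# Session 2, days later: the agent recalls past decisions and adapts them.
s2 = Session(bank)
print(bank.recall("preferred_start"))  # The Strip
```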

But raw memory isn't enough when agents need to reason over unstructured data. The Planner needed to know municipal regulations buried in PDFs. The solution was a RAG pipeline built by a data engineering agent that read PDFs, chunked them using the Lightning Engine for Apache Spark, stored them in AlloyDB, and converted them to vector embeddings automatically via AlloyDB's auto-embedding feature. The Planner agent accessed this knowledge through an AlloyDB remote MCP server.
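The pipeline's shape is worth seeing in miniature. The toy below substitutes a bag-of-words embedding and in-memory search for what the keynote did with Spark chunking, AlloyDB auto-embeddings, and an AlloyDB MCP server; the regulation text and all function names are invented for illustration:

```python
import math

# Toy RAG sketch: chunk a regulations document, embed each chunk, retrieve
# the chunk most relevant to a query via cosine similarity.

def chunk(text: str, size: int = 8) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> dict[str, int]:
    # Stand-in for a real embedding model: a word-count vector.
    vec: dict[str, int] = {}
    for w in text.lower().split():
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

doc = ("Street closures require a municipal permit filed thirty days ahead. "
       "Water stations must appear every three kilometers along the route. "
       "Medical tents are mandatory at the start and finish lines.")

# Index once: chunk and embed.
chunks = chunk(doc)
index = [(c, embed(c)) for c in chunks]

# Query time: embed the question, return the nearest chunk.
query = embed("how often are water stations required")
best = max(index, key=lambda item: cosine(query, item[1]))[0]
print(best)
```

Swap the word-count vectors for model embeddings and the list for a vector index, and this is structurally what the Planner does when it pulls municipal rules out of PDFs.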

A life-cycle assessment of Google's TPU hardware found that operational electricity emissions comprise over 70% of lifetime emissions. Memory management matters not just for correctness but for cost and sustainability: every redundant computation is both unnecessary latency and unnecessary carbon. (Schneider et al., 2025)

Key takeaway: Memory Bank provides long-term recall, while a RAG pipeline over AlloyDB turns unstructured data into directly usable agent context.

Demo 4: Debugging Agents at Scale

When hundreds of agents are running simultaneously, debugging becomes exponentially harder. You cannot attach a debugger to a distributed agent fleet.

The keynote demonstrated Agent Observability, which provides full execution traces for agents deployed on Agent Runtime. But the more impressive capability was Gemini Cloud Assist Investigations: an AI that reads traces, logs, and error data to perform root-cause analysis.

A developer inside their IDE (VS Code, in the demo) used an MCP connection to query Gemini Cloud Assist in natural language: "Why did the route planning fail?" The AI ingested Agent Observability traces and GitHub issues, identified the root cause, suggested fixes, and generated corrected code—all within minutes.

This is observability inverted. Instead of humans interpreting dashboards, AI interprets the system and communicates findings to humans in natural language. The agent doesn't just report what happened; it reasons about why it happened and proposes remediation.

Key takeaway: Observability now works both ways: the system doesn't just report problems—it diagnoses itself.

Demo 5: Intent to Infrastructure

The Simulator agent needed to run on a different stack: Google Kubernetes Engine (GKE) with Gemma 4, Google's open model. It had originally been deployed on Cloud Run, so its infrastructure definition needed migration.

The developer opened their IDE, invoked Gemini Cloud Assist via MCP, and issued a natural language instruction to convert the Cloud Run manifest to GKE. Gemini Cloud Assist served as a translator between human intent and infrastructure configuration, generating the necessary Kubernetes YAML and applying the changes to the live environment.
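For readers who haven't written GKE manifests, this is roughly the kind of Kubernetes Deployment such a translation has to produce. The manifest below is illustrative only: the names, image path, and resource figures are placeholders, not output shown in the demo.

```yaml
# Illustrative target of a Cloud Run -> GKE translation (placeholder values).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simulator-agent
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simulator-agent
  template:
    metadata:
      labels:
        app: simulator-agent
    spec:
      containers:
        - name: simulator
          image: us-docker.pkg.dev/example-project/agents/simulator:latest  # placeholder
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "1"
              memory: 2Gi
```

The demo's claim is that you describe the intent and never hand-write this file; the YAML still exists, it just becomes generated output rather than source.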

Cross-Team Orchestration (Croto) research has demonstrated that multi-agent frameworks produce measurably better results when agents operate semi-independently and exchange insights—exactly the pattern Google demonstrated with the Simulator running on separate infrastructure from the Planner, communicating through standardized protocols. (Du et al., 2025)

Key takeaway: "Move this workload to GKE" is now something you say, not something you configure in YAML.

Demo 6: No-Code Agents and Cross-Paradigm Interoperability

The most strategically revealing demonstration involved the Supply Chain agent—a logistics coordinator for water, food, and portable toilets. It was built entirely through Gemini Enterprise app's Agent Designer, a no-code visual interface where business users describe desired automations in plain language.

The key moment: the no-code Supply Chain agent was registered in Agent Registry alongside the full-code Planner agent, and the Planner called it via A2A. The Planner didn't know or care whether the Supply Chain agent was built in Python with ADK or through a visual interface. It only cared about the Agent Card.

This collapses the wall between "developer-built" and "business-built" automation. When both produce A2A-compatible agents registered in the same directory, the distinction becomes irrelevant. The agent ecosystem becomes a flat namespace where capability—not provenance—determines discoverability.

Key takeaway: There is no longer a wall between developer-built and business-built agents—both speak the same language.

Demo 7: Securing the Agent Mesh

The final demo addressed what happens when autonomous agents operate with real credentials in production environments.

Agent Identity assigns each agent a unique, trackable identity—distinct from generic service accounts that can be shared across workloads. Agent Gateway functions as a proxy between agents, applying IAM policies to inter-agent communication and enforcing guardrails on outbound access.

The demo showed a Finance MCP server that was locked down by default. The Planner agent couldn't access budget data until an explicit IAM Allow policy was added on the Agent Gateway with a ReadOnly condition. Outbound internet access for agents was similarly controlled via Egress Agent Policies.
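The enforcement semantics described here, default deny with explicit allows and conditions, can be modeled in a few lines. This sketches the behavior only; it is not the Agent Gateway API, and the policy fields are my own naming:

```python
from dataclasses import dataclass

# Default-deny gateway sketch: no agent reaches a tool server unless a
# matching allow policy exists, and a condition (e.g. ReadOnly) further
# narrows what the grant permits.

@dataclass(frozen=True)
class AllowPolicy:
    agent: str
    target: str
    condition: str  # e.g. "ReadOnly"

policies: set[AllowPolicy] = set()

def authorize(agent: str, target: str, action: str) -> bool:
    for p in policies:
        if p.agent == agent and p.target == target:
            if p.condition == "ReadOnly" and action != "read":
                continue  # a grant exists, but the condition forbids writes
            return True
    return False  # default deny: no policy means no access

# Before the explicit allow, the Planner is locked out of Finance data.
assert not authorize("planner", "finance-mcp", "read")

# Add the allow with a ReadOnly condition, as in the demo.
policies.add(AllowPolicy("planner", "finance-mcp", "ReadOnly"))
print(authorize("planner", "finance-mcp", "read"))   # True
print(authorize("planner", "finance-mcp", "write"))  # False
```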

Then Wiz entered. Google's recently acquired security platform scanned the entire agent ecosystem—source code, models, tool connections, and cloud infrastructure—and generated a security graph showing attack paths. The Red Agent (an AI-powered "intelligent attacker") continuously probed for vulnerabilities. The Green Agent proposed remediation, which was applied through Claude Code skills directly from the CLI.

Google's Agent Gateway understands both MCP and A2A protocols natively, providing centralized policy enforcement across all agent communication. This is not perimeter security. It's mesh security—every agent connection is authenticated, authorized, and auditable.

Key takeaway: Agent security is no longer about firewalls. It's about every agent having an identity and every inter-agent connection carrying a policy.

What the Developer Keynote Actually Proved

The seven demos collectively demonstrated something more important than any individual feature: the toolchain is ready.

Agent development has spent two years in the "hackathon phase"—impressive demos that collapse under production load. The Developer Keynote was Google's argument that this phase is over. You can now:

  • Prototype in Agent Designer, graduate to ADK code
  • Deploy to Agent Runtime with built-in session, memory, and observability
  • Connect agents across platforms via A2A without custom glue code
  • Debug distributed agent fleets with AI-assisted root-cause analysis
  • Provision infrastructure by describing intent in natural language
  • Let business users build agents that coexist with developer-built systems
  • Lock down every connection with agent-specific identity and gateway policies

The marathon demo wasn't about running. It was about showing that the pieces compose.

Try It Yourself Right Now

Google provides free access to experiment with this ecosystem:

  • Agent Development Kit (ADK) is open source. You can build a local agent right now with Python or TypeScript.
  • Agent2Agent (A2A) Protocol is available on GitHub with open specifications. Run two simple agents on your laptop.
  • 10 official Codelabs from Next '26 (linked in Sources below) walk you through everything from building a planner agent to securing it with Agent Gateway.

This ecosystem is no longer theoretical. Tonight, you can type adk init and watch your agents negotiate.

What the Keynote Left Unanswered

No keynote covers everything. Three questions stayed with me after the stream ended:

  • Vendor lock-in risk. A2A is open, but the managed MCP servers, Agent Runtime, and Memory Bank are tightly coupled to Google Cloud. How easily can you migrate a production agent mesh to another cloud?
  • Cost observability. Agents call agents call tools. A single marathon planning session could spawn dozens of sub-tasks. Who gets the bill, and how do you trace cost per agent invocation?
  • Agent sprawl governance. When every department can build no-code agents and register them in Agent Registry, who decides what's trustworthy? Agent Identity is a start, but organizational governance models are still undefined.

These aren't criticisms. They're the natural next questions when a platform moves from hackathon to production. I suspect we'll see answers at Next '27.

The Implications for Developers

This keynote articulated a role shift that has been brewing for 18 months. The developer is no longer the primary code producer but the system architect who defines agent topology, designs communication patterns, configures memory strategies, sets governance policies, and monitors emergent behavior.

Recent research on agentic software engineering identifies foundational pillars for this transition: process-level orchestration rather than function-level coding, intent specification rather than imperative implementation, and continuous verification rather than discrete testing. (Yuan et al., 2026)

The Agent Development Kit is now stable across four languages. The A2A protocol is governed by the Linux Foundation and running in 150 production environments. The platform is globally available with a free Express tier for experimentation.

The Developer Keynote wasn't about writing code. It was about defining intent, connecting agents, securing their interactions, and observing their behavior. That's the new job description.

What do you think? Does the A2A protocol change how you'll architect your next application? Have you experimented with ADK yet?

I'm curious: Is multi-agent architecture something you'll deploy in a real project in 2026, or does it still feel too heavy? For those who've tried ADK, what's been your biggest friction point? Drop your experience in the comments.

Sources

Google Cloud Official

  1. Developer Keynote Video — Google Cloud Next '26
  2. Day 1 Recap: Next '26 — Google Cloud Blog
  3. Next '26 Hands-On: 10 Codelabs — Google Cloud Blog
  4. What's New from Firebase at Cloud Next 2026 — Firebase Blog

Industry Analysis

  1. Google Cloud Next 2026: AI Agents, A2A Protocol, Workspace Studio, and the Full-Stack Bet — The Next Web
  2. Google Cloud Next '26 Developer Keynote Report (Japanese) — G-gen Tech Blog
  3. Wiz at Google Cloud Next: Machine-Speed AI Defense — Wiz Blog

Academic Papers

  1. Du, Z., Qian, C., Liu, W., et al. (2025). "Multi-Agent Collaboration via Cross-Team Orchestration." Findings of ACL 2025.
  2. Yuan, D., Lyu, F., et al. (2026). "Beyond Message Passing: A Semantic View of Agent Communication Protocols." arXiv:2604.02369.
  3. Schneider, T., et al. (2025). "Life-Cycle Emissions of AI Hardware: A Cradle-To-Grave Approach and Generational Trends." arXiv:2502.01671.