Google Introduces A2UI (Agent-to-User Interface): An Open Source Protocol for Agent Driven Interfaces
Kawish Hussain
December 24, 2025
Google has open-sourced A2UI, an Agent-to-User Interface specification and set of libraries that lets agents describe rich native interfaces in a declarative JSON format, while client applications render them with their own components.
The Problem Agents Weren't Meant to Solve (Until Now)
Imagine this: You're chatting with an AI agent about booking a restaurant. You ask for a table, and instead of getting a neat form with a date picker and a time selector, you get a wall of text asking you questions one by one. Click, read, answer. Click, read, answer. Repeat ad nauseam.
That's the problem A2UI solves. And frankly, it's a problem that deserves solving.
What is A2UI, Really?
Google's new open standard lets agents do something they couldn't do before: speak UI. Not through HTML. Not through JavaScript. Through a clean, declarative JSON payload that describes components, their properties, and data models.
Here's the magic part: your client application—whether it's built with React, Flutter, SwiftUI, or Angular—takes that JSON and renders it with its own native widgets. One format, infinite rendering possibilities.
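To make that concrete, a payload in this style might look like the sketch below. The field names (`components`, `children`, `label`) are invented here for illustration; the real schema lives in the A2UI specification.

```json
{
  "components": [
    { "id": "root", "type": "Card", "children": ["date", "time", "submit"] },
    { "id": "date", "type": "DatePicker", "label": "Date" },
    { "id": "time", "type": "TextField", "label": "Time" },
    { "id": "submit", "type": "Button", "label": "Book table" }
  ]
}
```

Note that there's nothing executable in there: it's pure data describing what to show, and the client decides how to show it.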
Security Without the Handcuffs
The genius of A2UI lies in what it doesn't allow. It's declarative data, not executable code. No arbitrary script execution. No trust nightmares.
The client maintains a curated catalog of trusted components—Button, TextField, Card, and so on. Agents can only reference what's in that catalog. It's like giving someone access to your LEGO bricks but not your power tools. Safe, controlled, and still incredibly flexible.
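A minimal sketch of that catalog check, assuming a flat list of typed components (the shapes and names below are illustrative, not the official A2UI API):

```typescript
// Trusted component catalog owned by the client. Anything an agent
// references that isn't in this set gets rejected before rendering.
const CATALOG = new Set(["Button", "TextField", "Card", "DatePicker"]);

interface ComponentSpec {
  id: string;
  type: string;
}

// Return the component types that are NOT in the trusted catalog.
function rejectedTypes(components: ComponentSpec[]): string[] {
  return components.map((c) => c.type).filter((t) => !CATALOG.has(t));
}

const rejected = rejectedTypes([
  { id: "a", type: "Button" },   // in the catalog → allowed
  { id: "b", type: "WebView" },  // not in the catalog → rejected
]);
// rejected === ["WebView"]
```

The key design point: the allowlist lives on the client, so an agent can never smuggle in a component the client didn't opt into.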
This matters more than you'd think when you're dealing with agents across trust boundaries. The code stays on the client. The UI spec comes from the agent. Clean separation of concerns.
Built for Language Models (and Humans)
A2UI's internal format is optimized for how language models actually work. Instead of deeply nested component trees, it uses a flat list of components with identifier references. Why? Because LLMs are better at generating flat structures, and it makes streaming a breeze.
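Here's a sketch of what resolving that flat list into a render tree could look like on the client. The flat shape mirrors the description above; the exact field names are assumptions, not the official schema:

```typescript
// A flat component entry: an id, a type, and child *references* by id
// instead of nested objects.
interface FlatComponent {
  id: string;
  type: string;
  children?: string[];
}

interface TreeNode {
  type: string;
  children: TreeNode[];
}

// Resolve id references into an actual tree, starting from a root id.
function buildTree(flat: FlatComponent[], rootId: string): TreeNode {
  const byId = new Map(flat.map((c): [string, FlatComponent] => [c.id, c]));
  const resolve = (id: string): TreeNode => {
    const c = byId.get(id);
    if (!c) throw new Error(`unknown component id: ${id}`);
    return { type: c.type, children: (c.children ?? []).map(resolve) };
  };
  return resolve(rootId);
}

const tree = buildTree(
  [
    { id: "root", type: "Card", children: ["name", "submit"] },
    { id: "name", type: "TextField" },
    { id: "submit", type: "Button" },
  ],
  "root"
);
// tree is a Card with two children: a TextField and a Button
```

Because each flat entry is self-contained, the model never has to keep a deep bracket structure balanced while generating.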
Which brings us to progressive rendering. Since A2UI is designed for streaming, clients can show partial interfaces while the agent is still computing. Users see something immediately. Better UX. Less waiting.
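A toy illustration of why the flat format streams well: each entry is independently parseable, so the client can paint as soon as an entry arrives instead of waiting for a complete tree (the generator below stands in for a real network stream):

```typescript
interface Chunk {
  id: string;
  type: string;
}

// Simulated agent output arriving one component at a time.
function* agentStream(): Generator<Chunk> {
  yield { id: "title", type: "Text" };
  yield { id: "date", type: "DatePicker" };
  yield { id: "submit", type: "Button" };
}

const painted: string[] = [];
for (const chunk of agentStream()) {
  // A real client would render the corresponding widget here;
  // the user already sees partial UI before the stream finishes.
  painted.push(chunk.type);
}
// painted === ["Text", "DatePicker", "Button"]
```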
The Architecture: Simple and Modular
The flow is elegantly straightforward:
- User sends a message to an agent
- Agent generates an A2UI response (that JSON payload)
- A2UI streams to the client over whatever transport you choose
- Client renderer library maps components to native widgets
- User interactions send events back to the agent
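The last step, sending interactions back to the agent, can be sketched as packaging the event as plain data too. The event shape below is hypothetical, chosen only to show the round trip:

```typescript
// Hypothetical event payload a client might send back to the agent.
interface UiEvent {
  componentId: string;
  eventType: string;
  value?: unknown;
}

// Package a user interaction so the host app can ship it back over
// whatever transport carried the original A2UI payload.
function onSubmit(componentId: string, formValue: Record<string, unknown>): UiEvent {
  return { componentId, eventType: "submit", value: formValue };
}

const evt = onSubmit("submit", { date: "2025-12-31", time: "19:00" });
// evt.eventType === "submit"
```

Symmetry is the point: UI goes out as data, interactions come back as data, and no code ever crosses the boundary.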
Being transport-agnostic means you're not locked in: A2UI already plays well with the A2A protocol and is extensible to other transports.
Why This Matters for Frontend Developers
As a frontend developer, you get to stay in your ecosystem. React? Use React components. Native mobile? Use your native components. The agent doesn't need to know or care. It just describes what it wants, and your framework handles the rest.
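In practice, "staying in your ecosystem" means the client owns a registry mapping A2UI component types to its own widgets. A minimal sketch, with plain strings standing in for React components or native views:

```typescript
type RenderFn = (props: Record<string, unknown>) => string;

// The client owns this mapping; in a real app the values would be React
// components, Flutter widgets, SwiftUI views, etc.
const registry: Record<string, RenderFn> = {
  Button: (p) => `[button: ${p.label}]`,
  TextField: (p) => `[input: ${p.label}]`,
};

function render(type: string, props: Record<string, unknown>): string {
  const fn = registry[type];
  if (!fn) throw new Error(`unknown component type: ${type}`);
  return fn(props);
}

const out = render("Button", { label: "Book table" });
// out === "[button: Book table]"
```

Swapping frameworks means swapping the registry values; the agent-side payload never changes.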
It's also an early-stage public preview (v0.8), released under Apache 2.0, so you can experiment without worrying about licensing headaches.
The Bottom Line
A2UI solves a real problem: how do you let remote agents present rich, interactive interfaces without sending code across trust boundaries? By treating UI as data, not logic. By keeping rendering local, description remote. By making it work with whatever framework you're already using.
It's the kind of thoughtful design that makes you wonder why it took this long to exist.