01 · FAQ
Frequently asked questions
Can I use my own components in Generative UI?
Yes — Controlled and Declarative Generative UI both let you use your own existing components, designs, and component libraries. In Controlled Generative UI, you register your own components with the agent directly; the agent renders them as-is, preserving whatever design system you ship them with. In Declarative Generative UI, you define a catalog of building-block primitives and provide a renderer for each one — those renderers can be your existing design-system components. The agent assembles UIs from your primitives; the pixels remain yours. Open Generative UI is the exception — there the agent generates the markup itself, and the host renders whatever the agent emits inside a sandbox.
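The Declarative flow can be sketched in a few lines. This is an illustrative model only, not the A2UI wire format: the component names (`Card`, `Metric`), the `Instance` shape, and string-returning renderers are all assumptions made to keep the sketch self-contained — in a real app each catalog entry would be one of your design-system components.

```typescript
// Hypothetical sketch of a Declarative catalog: you own the renderers,
// the agent only emits a tree of instances drawn from the catalog.
type Instance = {
  component: string;
  props: Record<string, unknown>;
  children?: Instance[];
};

// Each entry is a renderer you ship; strings stand in for real
// framework components so the sketch runs anywhere.
const catalog: Record<
  string,
  (props: Record<string, unknown>, children: string[]) => string
> = {
  Card: (props, children) =>
    `<card title="${props.title}">${children.join("")}</card>`,
  Metric: (props) => `<metric label="${props.label}" value="${props.value}"/>`,
};

// The host walks the agent-emitted tree. Anything outside the catalog
// is rejected, so the pixels remain yours.
function render(node: Instance): string {
  const renderer = catalog[node.component];
  if (!renderer) throw new Error(`Unknown component: ${node.component}`);
  return renderer(node.props, (node.children ?? []).map(render));
}

const tree: Instance = {
  component: "Card",
  props: { title: "Revenue" },
  children: [{ component: "Metric", props: { label: "MRR", value: "$42k" } }],
};
console.log(render(tree));
// → <card title="Revenue"><metric label="MRR" value="$42k"/></card>
```

The key design point: the agent never hands the host markup, only instance data, so the host's renderers are the last word on what reaches the screen.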
Do I have to rewrite my UI to adopt Generative UI?
No. Most adopters start by registering a handful of components alongside their existing app — a dashboard widget, an agent-rendered chart, a smart form. The agent only renders inside the surfaces you've designated; the rest of your app is unchanged. You can expand coverage across bands as needs emerge.
How do I keep the agent from rendering broken or off-brand UIs?
This is exactly what the Spectrum exists to manage. Controlled Gen UI is bounded by your component library — the agent literally cannot render anything you didn't ship. Declarative Gen UI is bounded by the typed schema of your catalog. Fully Open Gen UI is bounded by a sandbox and the constraints in the agent's prompt. Pick the band whose bounds match your risk tolerance for that surface.
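What "bounded by the typed schema of your catalog" means in practice: validate the agent's output against per-component prop schemas before anything renders. The schema shape below is an assumption for illustration, not the actual A2UI format, and the component names are made up.

```typescript
// Illustrative sketch: each catalog entry declares the props it accepts,
// and the host validates agent output before rendering.
type PropType = "string" | "number" | "boolean";

const schemas: Record<string, Record<string, PropType>> = {
  Button: { label: "string", disabled: "boolean" },
  ProgressBar: { percent: "number" },
};

// Returns a list of violations; an empty list means the instance is
// inside the catalog's bounds.
function validate(
  component: string,
  props: Record<string, unknown>,
): string[] {
  const schema = schemas[component];
  if (!schema) return [`"${component}" is not in the catalog`];
  const errors: string[] = [];
  for (const [key, value] of Object.entries(props)) {
    const expected = schema[key];
    if (!expected) errors.push(`unexpected prop "${key}" on ${component}`);
    else if (typeof value !== expected)
      errors.push(`${component}.${key} should be ${expected}`);
  }
  return errors;
}

console.log(validate("Button", { label: "Save", disabled: false })); // []
console.log(validate("Button", { label: 7, onClick: "hack()" }));    // two violations
```

Off-catalog components, unknown props, and wrong types all fail closed — the agent can only assemble what you declared.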
Which band should I use for my app?
Map your surfaces to the long-tail curve. Build Controlled components for the few highest-traffic, brand-critical surfaces. Use Declarative catalogs for the long tail of internal tools, reports, and contextual UIs. Reserve Fully Open and MCP Apps for third-party integrations and one-off experiences. Picking one band for the whole product is the most common architectural mistake.
Can I mix bands in one product?
Yes, and you should. A single chat session can render a Controlled dashboard component, a Declarative ad-hoc report, and an Open MCP App in sequence. The bands describe how each surface is produced, not where each surface lives.
Is it all in chat?
No — everything on this page works in-chat OR in-app. AG-UI also ships in-app primitives beyond rendering: shared state between the agent and the frontend, in-app actions the agent can invoke directly, and more. Generative UI fits anywhere in your product, not just a chat sidebar.
Does it work with mobile, Slack, … or just React?
It works everywhere. The example in this walkthrough is React for convenience, but everything shown here ships across the stack — React and Next.js, Vue, Svelte, native iOS and Android, React Native, Slack, Microsoft Teams, Discord, voice agents, email assistants, and CLI/terminal surfaces. AG-UI is a horizontal protocol layer: it connects any agent to any frontend, and the frontend is just the transport for the events the agent streams. The Generative UI Spectrum applies to every one of those surfaces.
Are user interactions inside generative UI real?
Yes — events flow both ways. A click, form submit, or drag inside a registered component fires back through the same channel the agent used to render it. In Controlled and Declarative Gen UI, that round-trip is a typed tool call from the frontend; in Fully Open Gen UI, the sandboxed iframe posts messages out. The agent receives the event and decides what to do next.
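The round-trip can be sketched with a minimal event envelope. The envelope shape (`surfaceId`, `action`, `payload`) and the in-memory channel are assumptions for illustration — they are not the AG-UI event schema, where the round-trip would be a typed tool call streamed over the protocol.

```typescript
// Hypothetical sketch of the interaction round-trip: the host renders a
// surface with an id, and interactions post back through the same
// channel, tagged with that id.
type InteractionEvent = {
  surfaceId: string;
  action: string;
  payload: Record<string, unknown>;
};

// Stand-in for the channel the agent used to render the surface.
const channel: InteractionEvent[] = [];
function emit(event: InteractionEvent) {
  channel.push(event);
}

// A registered component wires its handlers to emit through the channel.
function onSubmit(surfaceId: string, formData: Record<string, unknown>) {
  emit({ surfaceId, action: "form.submit", payload: formData });
}

// The agent side drains the channel and decides what to do next.
function drain(): InteractionEvent[] {
  return channel.splice(0);
}

onSubmit("lead-form-1", { email: "a@example.com" });
const events = drain();
console.log(events[0].action); // "form.submit"
```

In the Fully Open band the same pattern holds, except the emitting side is a sandboxed iframe posting messages out to the host instead of a registered component calling a handler.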
How does Generative UI relate to MCP?
MCP (Model Context Protocol) is a wire protocol for connecting agents to tools and external servers. Some of those servers ship UI alongside their tools — that's the MCP Apps point on the Spectrum, near the Open end. The other bands are independent of MCP. Put differently: MCP defines part of the transport; Generative UI is a UI paradigm that runs on many transports, MCP among them.
What is AG-UI?
AG-UI is the open agent↔user interaction protocol. It emerged from a partnership between CopilotKit and LangChain, with CrewAI joining shortly after, and has since been adopted across the industry. First-party integrations ship from Google (ADK, A2UI), Microsoft, Amazon (AWS Strands, AWS AgentCore), and Oracle, alongside Mastra, Pydantic, Agno, AG2, and LlamaIndex. The Claude SDK and the OpenAI Agent Framework are also supported. AG-UI is in production at the majority of Fortune 500 and Fortune 1000 companies.
AG-UI, A2UI, MCP-UI — what's the difference?
They sit at different layers. AG-UI is the agent↔user interaction protocol — how agents stream events to and receive events from frontends. A2UI is a declarative UI wire format (a typed catalog of components) that rides on top of AG-UI for the Declarative band. MCP-UI is the UI delivery channel for MCP servers — the surfaces that show up at the MCP Apps point on the Spectrum. They compose rather than compete.
Will agents replace traditional UIs?
All UI will be AI. Over the next few years, the majority of human↔technology interactions will be mediated by agentic systems — and agents with rich interfaces well beyond text are becoming the norm for all UI. The Spectrum is the taxonomy of how that mediation happens at each surface: when the developer holds the brush, when the agent does, and everything between.
02 · Where to go next
Where to go next
The Spectrum is a map, not a prescription. Read the AG-UI and A2UI specifications to see the wire formats behind Declarative Gen UI. Open the CopilotKit docs to try useComponent for the Controlled band and a2ui for the Declarative band in a working app. The fastest way to internalize the taxonomy is to ship one surface in each.
Try CopilotKit
useComponent for the Controlled band, a2ui for the Declarative band — in a working app.
Read the AG-UI spec
Open specification for how agents communicate with frontends, including streamed UI events.
Read the A2UI spec
Declarative UI wire format built on AG-UI — a typed catalog of components and instances.
