A reference taxonomy

What is the Generative UI Spectrum?

‘Generative UI’ refers to the family of UI paradigms that are enabled by LLMs and agents, and that are useful for interacting with the agentic applications they power.

Over the past two years, the CopilotKit team has helped the leaders in this space bring widely-used agentic products to production. What we’ve learned is that no single Generative UI approach is appropriate in all circumstances. Instead, Generative UI solutions live along a spectrum — the Generative UI Spectrum — which runs from full developer control over every pixel, to full agent autonomy over the rendered surface.

In this interactive walkthrough, we will explore the three pillars of the Generative UI Spectrum:

  • Controlled Generative UI — developers make pre-defined components available for the agent to render; the agent chooses which component to show.
  • Declarative Generative UI — developers declare catalogs of lego-like building blocks that the agent assembles on demand at runtime.
  • Open-Ended Generative UI — the agent either embeds 3rd-party applets, or generates fully custom UIs for rich visualizations, on the fly at runtime.

Controlled Generative UI

The workhorse of Generative UI.

With Controlled Generative UI, developers ship a fixed set of pre-defined components and register those components with the agent. At runtime, the agent chooses which component to render and with what data.

Code

TSX
// register a developer-defined component the agent can render
useComponent({
  name: "pieChart",
  description: "Displays a pie chart.",
  parameters: z.object({
    title: z.string(),
    description: z.string(),
    data: z.array(z.object({
      label: z.string(),
      value: z.number(),
    })),
  }),
  render: MyPieChart,
})

// the component the agent will render — written by you, in your design system
function MyPieChart({ title, description, data }) {
  return <>...</>
}

Best for

Controlled Generative UI is the ‘workhorse’ of the Generative UI Spectrum. In a sense it is the most ‘boring’ variant — the agent's only job is to choose which pre-built component fits the moment, never to invent the rendered surface itself.

But that predictability is exactly what makes it the right tool for the most-used surfaces of a product. For example, an airline would want its flight tickets to show up in exactly the same way every time, for pixel-perfection and maximal end-user predictability.

Few most-used surfaces (e.g. flight tickets, booking confirmations)

Declarative Generative UI

Where the long tail lives.

With Declarative Generative UI, developers ship a catalog of composable building blocks — a vocabulary of primitives the agent can compose with. At runtime, the agent decides how to assemble those primitives into a UI tree for each request.

Code

TSX
// catalog definitions — describe the building block components to the agent
export const catalogDefinitions = {
  Card: {
    description: "A titled card container.",
    props: z.object({ title: z.string(), subtitle: z.string().optional() }),
  },
  PrimaryButton: {
    description: "A styled primary button.",
    props: z.object({ label: z.string(), action: z.any().optional() }),
  },
} satisfies CatalogDefinitions

// catalog renderers — how each primitive renders in the DOM (React, in this example)
export const catalogRenderers = {
  Card: MyCard,
  PrimaryButton: MyPrimaryButton,
} satisfies CatalogRenderers<typeof catalogDefinitions>

// definitions + renderers together define a catalog declaration
const catalog = createCatalog(catalogDefinitions, catalogRenderers)

<CopilotKit
  runtimeUrl="/api/copilotkit"
  a2ui={{ catalog }}
>
  <CopilotChat />
</CopilotKit>

Best for

Declarative Generative UI is where the long tail lives. It trades some pixel-perfection (every UI is a combination of pre-specified building blocks) and determinism (the agent does the assembling) for far more breadth.

That trade-off is exactly what makes it the right tool for the long tail of secondary interactions in consumer apps — where having something to show matters more than perfect predictability or pixel-perfection. Returning to the airline example, lost-and-found flows are a natural fit for Declarative Generative UI.

Declarative Generative UI is also a good fit for internal enterprise applications, where again, pixel-perfection and a perfectly deterministic user experience are less important than an efficient implementation. With Declarative Generative UI, you define the component catalog once and let the agents assemble UIs as needed.
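To make "the agent assembles primitives into a UI tree" concrete, here is a minimal sketch in plain TypeScript of what an agent-assembled tree for a lost-and-found flow might look like on the wire. The node shape and component names are illustrative, not the actual A2UI wire format:

```typescript
// Illustrative sketch: an agent-assembled UI tree — a nested composition of
// the catalog's primitives, chosen by the agent at runtime. The host walks
// this tree and renders each node with the renderer you registered for it.
type UINode = {
  component: string;
  props: Record<string, unknown>;
  children?: UINode[];
};

const agentAssembledTree: UINode = {
  component: "Card",
  props: { title: "Report a lost item", subtitle: "Flight UA 1282" },
  children: [
    { component: "PrimaryButton", props: { label: "Start report" } },
  ],
};

// A trivial tree walk, standing in for the host's render pass
function countNodes(node: UINode): number {
  return 1 + (node.children ?? []).reduce((n, child) => n + countNodes(child), 0);
}

console.log(countNodes(agentAssembledTree)); // 2
```

The important property is that every node names a primitive from your catalog; the agent never emits raw markup in this band.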

Internal enterprise + long-tail consumer (e.g. lost & found, internal dashboards)

MCP Apps

Inject 3rd-party apps into your agentic app.

With MCP Apps, developers inject 3rd-party surfaces directly into their own agentic application via embedded iframes. At runtime, those surfaces load inside a sandbox; the agent and user interact with them directly.

The AG-UI MCP Apps handshake lets you bring the same applications that were designed for the ChatGPT and Claude App Stores into your own custom agents and agentic applications.

Code

TSX
<CopilotKit
  runtimeUrl="/api/copilotkit"
  MCPApps={[
    "excalidraw.mcp.com",
    "spreadsheet.mcp.com",
  ]}
>
  <CopilotChat />
</CopilotKit>

Best for

MCP Apps are designed primarily for the “super hosts” (ChatGPT, Claude, Cursor, etc.) rather than for your own applications and agents. The iframe architecture allows for truly open-ended experiences controlled by a remote service.

The indirection and decoupling that come with this mean it's less suitable for your own use-cases — for the same reasons you wouldn't display an app within an app in standard development unless forced to. It's also not yet a good fit for non-web surfaces such as mobile or Slack (today the integration is iframe-only).

Custom & 3rd-party UI (e.g. fully custom apps, 3rd-party MCP apps)

Fully Open Generative UI

Where the agent owns the canvas.

With Open Generative UI, the agent owns the entire visual surface. It returns a complete UI — typically HTML, SVG, or a remote app URL — which the host renders inside a sandbox. At runtime, the agent has full autonomy over markup, layout, and styling.

Code

TSX
// hand the canvas to the agent — open-ended HTML in sandboxed iframes
<CopilotKit
  runtimeUrl="/api/copilotkit"
  openGenerativeUI={true}
>
  {/* Agent-generated HTML renders in sandboxed iframes automatically */}
  <CopilotChat />
</CopilotKit>

Best for

Open Generative UI represents the full promise of agentic UI — the agent owns the entire visual surface, generating markup, layout, and styling on the fly to give the user exactly what it thinks they want to see. It is the only point on the Spectrum where the agent, not the developer, decides what shape the answer takes.

That freedom comes at a real cost. Open Generative UI is more error-prone, less predictable, slower, and more expensive than the bands to its left — realistically still mostly experimental today. As LLMs become smarter, that cost shrinks and the band becomes increasingly relevant. For now, it is best suited to one-off visualizations where ‘good enough and surprising’ beats ‘pixel-perfect and predictable’ — for example, an agent building a bespoke mini-app on the fly to answer a specific user question.
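One way a host can contain that freedom is the standard HTML iframe sandbox. This is a plain-TypeScript sketch of the general technique, not CopilotKit's actual implementation; the function name is invented:

```typescript
// Sketch: wrapping agent-generated HTML in a sandboxed iframe via srcdoc.
// `allow-scripts` without `allow-same-origin` keeps the frame isolated from
// the parent page's DOM, cookies, and storage.
function buildSandboxedFrame(agentHtml: string): string {
  const sandbox = "allow-scripts";
  // srcdoc is an HTML attribute, so escape double quotes in the payload
  const escaped = agentHtml.replace(/"/g, "&quot;");
  return `<iframe sandbox="${sandbox}" srcdoc="${escaped}"></iframe>`;
}

console.log(buildSandboxedFrame("<h1>Hi</h1>"));
```

However the framing is done, the invariant is the same: the agent owns the pixels inside the sandbox, never the page around it.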

Custom & 3rd-party UI (e.g. fully custom apps, 3rd-party MCP apps)

01 · FAQ

Frequently asked questions

Can I use my own components in Generative UI?

Yes — Controlled and Declarative Generative UI both let you use your own existing components, designs, and component libraries. In Controlled Generative UI, you register your own components with the agent directly; the agent renders them as-is, preserving whatever design system you ship them with. In Declarative Generative UI, you define a catalog of building-block primitives and provide a renderer for each one — those renderers can be your existing design-system components. The agent assembles UIs from your primitives; the pixels remain yours. Open Generative UI is the exception — there the agent generates the markup itself, and the host renders whatever the agent emits inside a sandbox.
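The "pixels remain yours" point can be shown in a framework-free sketch. Here trivial string-returning functions stand in for your design-system components; all names are illustrative:

```typescript
// Sketch: catalog renderers are just your own components. The agent's output
// is only a component name plus props — your function owns the output.
type Props = Record<string, string>;
type Renderer = (props: Props) => string;

// "Design system" components you already have, registered by name
const myRenderers: Record<string, Renderer> = {
  Card: ({ title }) => `[card] ${title}`,
  PrimaryButton: ({ label }) => `[button] ${label}`,
};

function renderFromAgent(choice: { name: string; props: Props }): string {
  const renderer = myRenderers[choice.name];
  if (!renderer) throw new Error(`Unknown component: ${choice.name}`);
  return renderer(choice.props);
}

console.log(renderFromAgent({ name: "Card", props: { title: "Lost & Found" } }));
// → "[card] Lost & Found"
```

In a real app the renderers would be your React (or Vue, Svelte, …) components; the shape of the handoff is the same.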

Do I have to rewrite my UI to adopt Generative UI?

No. Most adopters start by registering a handful of components alongside their existing app — a dashboard widget, an agent-rendered chart, a smart form. The agent only renders inside the surfaces you've designated; the rest of your app is unchanged. You can grow band coverage as needs emerge.

How do I keep the agent from rendering broken or off-brand UIs?

This is exactly what the Spectrum exists to manage. Controlled Gen UI is bounded by your component library — the agent literally cannot render anything you didn't ship. Declarative Gen UI is bounded by the typed schema of your catalog. Fully Open Gen UI is bounded by a sandbox and the constraints in the agent's prompt. Pick the band whose bounds match your risk tolerance for that surface.
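A toy validator makes "bounded by the typed schema of your catalog" tangible. In practice the schema check is done by your schema library (zod in the snippets above); this standalone sketch only illustrates the gatekeeping step:

```typescript
// Sketch: before rendering, the host validates the agent's proposed component
// name and props against the catalog's declared shape. Anything outside the
// catalog — or with mistyped props — is simply never rendered.
type Shape = Record<string, "string" | "number">;

const catalogShapes: Record<string, Shape> = {
  Card: { title: "string" },
  PrimaryButton: { label: "string" },
};

function isValid(name: string, props: Record<string, unknown>): boolean {
  const shape = catalogShapes[name];
  if (!shape) return false; // not in the catalog
  return Object.entries(shape).every(([key, t]) => typeof props[key] === t);
}

console.log(isValid("Card", { title: "Booking #42" })); // true
console.log(isValid("Card", { title: 42 }));            // false (wrong type)
console.log(isValid("Marquee", { text: "off-brand" })); // false (not in catalog)
```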

Which band should I use for my app?

Map your surfaces to the long-tail curve. Build Controlled components for the few highest-traffic, brand-critical surfaces. Use Declarative catalogs for the long tail of internal tools, reports, and contextual UIs. Reserve Fully Open and MCP Apps for third-party integrations and one-off experiences. Picking one band for the whole product is the most common architectural mistake.

Can I mix bands in one product?

Yes, and you should. A single chat session can render a Controlled dashboard component, a Declarative ad-hoc report, and an Open MCP App in sequence. The bands describe how each surface is produced, not where each surface lives.

Is it all in chat?

No — everything on this page works in-chat OR in-app. AG-UI also ships in-app primitives beyond rendering: shared state between the agent and the frontend, in-app actions the agent can invoke directly, and more. Generative UI fits anywhere in your product, not just a chat sidebar.

Does it work with mobile, Slack, … or just React?

It works everywhere. The example in this walkthrough is React for convenience, but everything shown here ships across the stack — React and Next.js, Vue, Svelte, native iOS and Android, React Native, Slack, Microsoft Teams, Discord, voice agents, email assistants, and CLI/terminal surfaces. AG-UI is a horizontal protocol layer: it connects any agent to any frontend, and the frontend is just the transport for the events the agent streams. The Generative UI Spectrum applies to every one of those surfaces.

Are user interactions inside generative UI real?

Yes — events flow both ways. A click, form submit, or drag inside a registered component fires back through the same channel the agent used to render it. In Controlled and Declarative Gen UI, that round-trip is a typed tool call from the frontend; in Fully Open Gen UI, the sandboxed iframe posts messages out. The agent receives the event and decides what to do next.
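The round-trip can be modeled as two typed messages moving in opposite directions. This plain-TypeScript sketch is a simplification of the actual event stream; the type and field names are invented for illustration:

```typescript
// Sketch: events flow both ways. The agent renders a component (downstream),
// the user's interaction fires a typed event back (upstream).
type RenderEvent = {
  kind: "render";
  component: string;
  props: Record<string, unknown>;
};
type InteractionEvent = {
  kind: "interaction";
  component: string;
  action: string;
  payload: Record<string, unknown>;
};

// Downstream: the agent renders a pie chart
const render: RenderEvent = {
  kind: "render",
  component: "pieChart",
  props: { title: "Q3 Revenue" },
};

// Upstream: the user clicks a slice; the frontend fires a typed event back
// through the same channel, tagged with the component that produced it
function onUserClick(source: RenderEvent, slice: string): InteractionEvent {
  return {
    kind: "interaction",
    component: source.component,
    action: "sliceClicked",
    payload: { slice },
  };
}

console.log(onUserClick(render, "EMEA").action); // "sliceClicked"
```

The agent receives the `InteractionEvent` on its next turn and decides what to render or do next.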

How does Generative UI relate to MCP?

MCP (Model Context Protocol) is a wire protocol for connecting agents to tools and external servers. Some of those servers ship UI alongside their tools — that's the MCP Apps point on the Spectrum, near the Open end. The other bands are independent of MCP. Put differently: MCP defines part of the transport; Generative UI is a UI paradigm that runs on many transports, MCP among them.

What is AG-UI?

AG-UI is the open agent↔user interaction protocol. It emerged from a partnership between CopilotKit and LangChain, with CrewAI joining shortly after, and has since been adopted across the industry. First-party integrations ship from Google (ADK, A2UI), Microsoft, Amazon (AWS Strands, AWS AgentCore), and Oracle, alongside Mastra, Pydantic, Agno, AG2, and LlamaIndex. The Claude SDK and the OpenAI Agent Framework are also supported. AG-UI is in production at the majority of Fortune 500 and Fortune 1000 companies.

AG-UI, A2UI, MCP-UI — what's the difference?

They sit at different layers. AG-UI is the agent↔user interaction protocol — how agents stream events to and receive events from frontends. A2UI is a declarative UI wire format (a typed catalog of components) that rides on top of AG-UI for the Declarative band. MCP-UI is the UI delivery channel for MCP servers — the surfaces that show up at the MCP Apps point on the Spectrum. They compose rather than compete.

Will agents replace traditional UIs?

All UI will be AI. Over the next few years, the majority of human↔technology interactions will be mediated by agentic systems — and agents, with rich interfaces well beyond text, are becoming the norm for all UI. The Spectrum is the taxonomy of how that mediation happens at each surface: when the developer holds the brush, when the agent does, and everything between.

02 · Where to go next

Where to go next

The Spectrum is a map, not a prescription. Read the AG-UI and A2UI specifications to see the wire formats behind Declarative Gen UI. Open the CopilotKit docs to try useComponent for the Controlled band and a2ui for the Declarative band in a working app. The fastest way to internalize the taxonomy is to ship one surface in each.