

Introducing AG-UI: The Protocol Where Agents Meet Users

By Nathan Tarbert
May 12, 2025

We're thrilled to announce AG-UI, the Agent-User Interaction Protocol, a streamlined bridge connecting AI agents to real-world applications.

What is AG-UI?

AG-UI is an open, lightweight protocol that streams a single JSON event sequence over standard HTTP or an optional binary channel. These events—messages, tool calls, state patches, lifecycle signals—flow seamlessly between your agent backend and front-end interface, maintaining perfect real-time synchronization.
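To make the event sequence concrete, here is an illustrative sketch in Python. The event type names (TEXT_MESSAGE_CONTENT, TOOL_CALL_START, STATE_DELTA) come from the protocol; the other field names below are assumptions for illustration, not the normative schema (see docs.ag-ui.com for the real one):

```python
# An illustrative AG-UI-style event sequence. Field names other than
# "type" are assumptions for this sketch, not the normative spec.
events = [
    {"type": "RUN_STARTED", "runId": "run_1"},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "msg_1", "delta": "Hello"},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "msg_1", "delta": ", world"},
    {"type": "TOOL_CALL_START", "toolCallId": "call_1", "toolName": "search"},
    {"type": "STATE_DELTA",
     "delta": [{"op": "replace", "path": "/status", "value": "done"}]},
    {"type": "RUN_FINISHED", "runId": "run_1"},
]

# A client folds the stream into UI state, e.g. concatenating message deltas:
text = "".join(e["delta"] for e in events
               if e["type"] == "TEXT_MESSAGE_CONTENT")
print(text)  # -> Hello, world
```

The key property is that one flat sequence carries messages, tool calls, and state patches together, so the UI never has to correlate multiple channels.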

Get started in minutes using our TypeScript or Python SDK with any agent backend (OpenAI, Ollama, LangGraph, or custom code). Visit docs.ag-ui.com for the specification, quick-start guide, and interactive playground.


Agent-User Interaction

Today’s AI Agent ecosystem is maturing. Agents are going from interesting viral demos to actual production use, including by some of the biggest enterprises in the world.

However, the ecosystem has largely focused on backend automation: processes that are kicked off or run automatically, operate with limited user interaction, and hand their output back when finished.

Common use-cases include data migration, research and summarization, and form-filling: repeatable, well-scoped workflows where accuracy can be verified, or where 80% accuracy is good enough.

These have already been big productivity boosters, automating time-consuming and tedious tasks.

Where Agents Meet Users

Coding tools (Devin vs. Cursor)

Throughout the adoption of generative AI, coding tools have been the canaries in the coal mine, and Cursor is the best example of a user-interactive agent: an AI agent that works alongside users in a shared workspace.

This contrasts with Devin, which promised a fully autonomous agent that automates high-level work.

For many of the most important use-cases, agents are most helpful when they can work alongside users: users can see what the agent is doing, co-work on the same output, and easily iterate together in a shared workspace.

The Challenges of Building a User-Interactive Agent

Creating these collaborative experiences presents significant technical challenges:

  • Real-time streaming: LLMs produce tokens incrementally; UIs need them instantly without blocking on the full response.
  • Tool orchestration: Modern agents call functions, run code, hit APIs. The UI must show progress and results, sometimes ask for human approval, and then resume the run—all without losing context.
  • Shared mutable state: Agents often generate plans, tables, or code folders that evolve step-by-step. Shipping entire blobs each time wastes bandwidth; sending diffs demands a clear schema.
  • Concurrency & cancellation: A user might fire off multiple queries, stop one mid-flight, or switch threads. The backend and front-end need thread IDs, run IDs, and an orderly shutdown path.
  • Security boundaries: Streaming arbitrary data over WebSockets is easy until you need CORS, auth tokens, and audit logs that an enterprise will sign off on.
  • Framework sprawl: LangChain, CrewAI, Mastra, AG2, home-grown scripts—all speak slightly different dialects. Without a standard, every UI must reinvent adapters and edge-case handling.
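The shared-mutable-state challenge is worth making concrete. STATE_DELTA events suggest a JSON-Patch-style diff; here is a minimal, hedged sketch of applying such a diff (real clients should use a full JSON Patch implementation, and the state shape below is invented for illustration):

```python
import copy

def apply_patch(state, ops):
    """Apply a minimal subset of JSON-Patch-style ops (add/replace/remove).
    Illustrative only: supports dict keys, not array indices, and is not
    the AG-UI SDK's actual implementation."""
    state = copy.deepcopy(state)  # keep the previous snapshot intact
    for op in ops:
        keys = [k for k in op["path"].split("/") if k]
        target = state
        for k in keys[:-1]:       # walk to the parent of the target key
            target = target[k]
        if op["op"] in ("add", "replace"):
            target[keys[-1]] = op["value"]
        elif op["op"] == "remove":
            del target[keys[-1]]
    return state

state = {"plan": {"steps": 3, "status": "running"}}
patched = apply_patch(
    state, [{"op": "replace", "path": "/plan/status", "value": "done"}]
)
print(patched["plan"]["status"])  # -> done
```

Shipping a few-byte diff like this, instead of re-sending the whole plan on every step, is what makes step-by-step state sharing cheap.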

The AG-UI Solution


AG-UI addresses these challenges through a simple yet powerful approach:

Your client makes a single POST to the agent endpoint, then listens to a unified event stream. Each event has a type (e.g., TEXT_MESSAGE_CONTENT, TOOL_CALL_START, STATE_DELTA) and a minimal payload. Agents emit events as they occur, and UIs respond appropriately: displaying partial text, rendering visualizations when tools complete, or updating interfaces when state changes.

Built on standard HTTP, AG-UI integrates smoothly with existing infrastructure while offering an optional binary serializer for performance-critical applications.
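On the client, this model usually reduces to one dispatch loop over the stream. A minimal sketch with a stubbed-in stream (in practice the events arrive over HTTP; the event shapes and callback names here are assumptions, not the SDK's actual API):

```python
def handle_event(event, ui):
    """Dispatch one AG-UI-style event to UI state (illustrative shapes)."""
    t = event["type"]
    if t == "TEXT_MESSAGE_CONTENT":
        ui["text"] += event["delta"]           # render partial text instantly
    elif t == "TOOL_CALL_START":
        ui["tools"].append(event["toolName"])  # surface tool progress
    elif t == "STATE_DELTA":
        ui["patches"].append(event["delta"])   # queue diff for shared state
    # unknown event types fall through, so the protocol can grow

ui = {"text": "", "tools": [], "patches": []}
stream = [  # stand-in for the HTTP event stream
    {"type": "TEXT_MESSAGE_CONTENT", "delta": "Searching"},
    {"type": "TOOL_CALL_START", "toolName": "web_search"},
    {"type": "TEXT_MESSAGE_CONTENT", "delta": "... done."},
]
for event in stream:
    handle_event(event, ui)
print(ui["text"])   # -> Searching... done.
print(ui["tools"])  # -> ['web_search']
```

Because every framework emits the same event types, this one loop works unchanged whether the backend is LangGraph, CrewAI, or custom code.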

What This Enables

AG-UI establishes a consistent contract between agents and interfaces, eliminating the need for custom WebSocket formats and text-parsing hacks. With this unified protocol:

  • Components become interchangeable: Use CopilotKit's React components with any AG-UI source
  • Backend flexibility: Switch between cloud and local models without UI changes
  • Multi-agent coordination: Orchestrate specialized agents through a single interface
  • Enhanced development: Build faster with richer experiences and zero vendor lock-in

AG-UI isn't just a technical specification—it's the foundation for the next generation of AI-enhanced applications that enable seamless collaboration between humans and agents.

Want to learn more?

Book a call and connect with our team

Please include who you are, what you're building, and your company size in the meeting description, and we'll help you get started today!

We'd love to get your feedback. Please join our AG-UI Discord Community and join the conversation.

Start building today at docs.ag-ui.com
