Introducing the AG-UI Dojo: A Learning Tool to Help You Ship Faster
By Nathan Tarbert and Eli Berman
September 10, 2025

Building AI agent frontends? You've probably hit the usual roadblocks: streaming updates that break, state bugs that make no sense, and tool calls that require way too much setup.

The AG-UI Dojo fixes those problems. It's a collection of working examples you can actually visualize and use.

What Makes Up the Dojo?

We built this with three parts:

  • Preview: Run the demo. See how it actually behaves.
  • Code: Check out the implementation. No hunting through repos.
  • Docs: Official specs and guides, right next to the code.

You can visualize it, understand it, and then use it. Our goal was to make it simple for developers who just want to see something working fast.

What the Demos Cover

Six core features, each in its own "hello world"-style example:

  • Agentic Chat: Streaming chat with tool hooks built in
  • Human-in-the-Loop: Agent planning that waits for user input
  • Agentic Generative UI: Long tasks with UI that updates as you go
  • Tool-Based Generative UI: A haiku generator that renders images nicely in the chat
  • Shared State: Agent and UI state stay in sync in both directions (see the sketch below)
  • Predictive State Updates: Real-time collaboration with the agent

Each demo shows one building block. Put them together, and you get real agent apps.
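
To make one of these building blocks concrete, here's a minimal sketch of the Shared State pattern using CopilotKit's `useCoAgent` hook. The agent name and state shape below are hypothetical placeholders; the Dojo's Code tab has the canonical version for each demo.

```tsx
import { useCoAgent } from "@copilotkit/react-core";

// Hypothetical state shape; in the Dojo, each demo defines its own.
type TripState = {
  destination: string;
  stops: string[];
};

export function TripPlanner() {
  // useCoAgent keeps this component's state and the agent's state in sync:
  // UI edits stream to the agent, and agent updates re-render the UI.
  const { state, setState } = useCoAgent<TripState>({
    name: "trip_agent", // must match the agent registered on the backend
    initialState: { destination: "", stops: [] },
  });

  return (
    <div>
      <input
        value={state.destination}
        onChange={(e) => setState({ ...state, destination: e.target.value })}
      />
      <ul>
        {state.stops.map((stop) => (
          <li key={stop}>{stop}</li>
        ))}
      </ul>
    </div>
  );
}
```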

A Learning Tool

The Dojo works as both a learning tool and a debugging resource. Our documentation calls it "learning-first": you walk through each capability and see the code that makes it work.

It's also your implementation checklist for building new integrations. Run through the demos, and you'll know your AG-UI setup handles each capability properly.

Common problems: event ordering mistakes, malformed payloads, and state sync bugs.

You can troubleshoot all of that here before you ship.
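
If event ordering is your suspect, it helps to compare against a well-formed run. Here's a sketch of the AG-UI event sequence for a single streamed assistant message, with payload fields abbreviated; the Docs tab carries the full protocol schemas.

```ts
// A minimal, well-formed AG-UI event sequence for one streamed message.
// Ordering rules the Dojo demos let you check against:
//   1. RUN_STARTED precedes all message, tool, and state events
//   2. Every TEXT_MESSAGE_CONTENT falls between its matching START and END
//   3. RUN_FINISHED closes the run
const events = [
  { type: "RUN_STARTED", threadId: "t_1", runId: "r_1" },
  { type: "TEXT_MESSAGE_START", messageId: "m_1", role: "assistant" },
  { type: "TEXT_MESSAGE_CONTENT", messageId: "m_1", delta: "Hello" },
  { type: "TEXT_MESSAGE_CONTENT", messageId: "m_1", delta: ", world!" },
  { type: "TEXT_MESSAGE_END", messageId: "m_1" },
  { type: "RUN_FINISHED", threadId: "t_1", runId: "r_1" },
];
```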

Faster Development

Three things:

  1. Faster learning - see the code working, not just sitting in documentation
  2. Less complexity - bite-sized demos instead of overwhelming examples
  3. Better debugging - validate your setup before production

Getting Started

Try the Dojo. Click through the previews, check the code tabs, and read the docs.

Use it when you're building your own agent UIs. These demos work as reference implementations and testing grounds for your integration. Check out the Dojo source code here.

We’d love to know how this tool has helped your team. Reach out to book a meeting; we take your feedback seriously.

Happy Building!
