We open sourced the AI dev team that builds our product

Dev.to / 4/22/2026


Key Points

  • The product is built by five persistent Claude Code agents (PM, engineering, QA, marketing, and analyst) that coordinate continuously using a dedicated messaging layer called AgentDM.
  • The setup is open-sourced, providing the “team-shaped” orchestration approach and tooling via the Teamfuse repository (https://github.com/agentdmai/teamfuse).
  • The article argues that existing orchestration frameworks often force either a single monolithic process or a brittle shell pipeline, and instead proposes role-based agents with messaging, procedures, and easy swap-in/out.
  • The architecture is organized into four layers, including an Operator layer for bootstrap/add-agent slash commands, AgentDM as the messaging bus, and role-specific persistent agent sessions with separate prompts, MCP servers, and skill libraries.
  • A lightweight Python wrapper keeps each Claude process warm to avoid repeated MCP connection/skill loading on every tick, with a special case where the marketing agent uses a shared Chrome session.
  • The design enables agent workflows to escalate to humans via a Slack-bridge channel (e.g., #leads) without relying on filesystem polling.

Our product is built by 5 Claude Code agents that DM each other all day. PM, eng, QA, marketing, analyst.
Each one is its own persistent Claude Code session with its own system prompt, its own MCP servers, its own skill library.

They coordinate over a messaging layer called AgentDM the same way teams coordinate on Slack.

We open sourced the setup: https://github.com/agentdmai/teamfuse

Why a team, and why this shape

Every AI orchestration framework makes you choose between:

Option one: one giant Python process with every role stuffed into the same runtime. Functions calling functions calling functions. If one "agent" misbehaves, it takes the rest down with it. If you want to swap a role, you're refactoring classes.

Option two: a shell pipeline.
claude -p "be the PM" | claude -p "be the eng" | ....
This falls apart the moment you need the PM to ask the eng a question mid-task.

Neither felt like how real teams work. Real teams have messaging. Real teams have roles. Real teams have standing procedures.
And in a real team you can drop a teammate in or out without rewriting anything else.

So we built that shape.

Four layers

Operator: You, the human running the setup. You run slash commands like /teamfuse-init to bootstrap the company and /teamfuse-add-agent to add a new role, and you communicate on the #leads channel through a Slack bridge so the agents can escalate when something actually needs human attention.

AgentDM: The messaging bus. Every DM and every channel post goes through it. When eng finishes a PR, it DMs QA the URL. When QA smokes green, it DMs PM. When PM needs a human decision, it posts in #leads. Nothing coordinates by polling the filesystem.
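The handoff chain above can be sketched as a tiny in-memory bus. This is a toy stand-in, not the actual AgentDM API; the class, method names, and the PR URL are all illustrative:

```python
class ToyBus:
    """Toy in-memory stand-in for the AgentDM messaging bus."""

    def __init__(self):
        self.inboxes = {}  # recipient or channel -> list of (sender, text)

    def dm(self, sender, recipient, text):
        self.inboxes.setdefault(recipient, []).append((sender, text))

    def post(self, sender, channel, text):
        # Channels (#eng, #leads, ...) behave like shared inboxes here.
        self.inboxes.setdefault(channel, []).append((sender, text))

bus = ToyBus()
# eng finishes a PR -> DMs QA the URL
bus.dm("eng", "qa", "PR ready: https://github.com/example/app/pull/7")
# QA smokes green -> DMs PM
bus.dm("qa", "pm", "smoke suite green")
# PM needs a human -> posts in #leads (bridged to Slack)
bus.post("pm", "#leads", "need a human decision on release timing")
```

The point of the shape: every handoff is a message to a named recipient, so swapping an agent in or out only changes who is on the other end of the DM, not the rest of the system.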

Agents: 5 persistent Claude Code sessions, one per role.
Each lives in its own directory with its own CLAUDE.md, its own MEMORY.md, its own .mcp.json. A thin Python wrapper keeps the claude process hot across ticks so we do not pay the MCP connection + skills load every time. Marketing is the only agent that boots with claude --chrome because the host has exactly one browser session to share.
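The warm-process idea can be sketched as: spawn the child once, then feed each tick as a line over stdin, so session startup costs (MCP connections, skill loading) are paid once. The real wrapper drives the claude CLI; cat stands in below so the sketch is runnable anywhere, and the class name is ours, not Teamfuse's:

```python
import subprocess

class WarmProcess:
    """Spawn the child once; each tick is a line over stdin."""

    def __init__(self, cmd):
        self.proc = subprocess.Popen(
            cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
        )

    def tick(self, prompt: str) -> str:
        # The child process (and whatever it loaded at spawn) stays
        # alive between calls; we only pay for the new input.
        self.proc.stdin.write(prompt + "\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().rstrip("\n")

    def stop(self):
        self.proc.stdin.close()
        self.proc.wait()

agent = WarmProcess(["cat"])  # stand-in for the real claude invocation
first = agent.tick("tick 1")
second = agent.tick("tick 2")  # same process, no re-spawn between ticks
agent.stop()
```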

Control panel: A local Next.js app at 127.0.0.1:3005, styled like an electrical breaker box. One breaker card per agent. State dot, token gauge, start/stop/wake, chevron for logs and MCP tools and live skills. Master breaker flips the whole grid.

What you actually get in the repo

Five starter roles, each with a committed CLAUDE.md and a .mcp.json.example for its per-role MCP servers.

A shared SOP library under agents/sop/: card lifecycle, WIP caps, wake protocol, PR review protocol, commit attribution, release validation, browser request format, DB access rules. This is the "standing procedures" bit.
Every role CLAUDE.md references the relevant SOPs, so when two agents disagree about a protocol, there is a written file to point at.

A streaming agent loop in Python that handles the lifecycle: spawn claude once, feed tick prompts through stdin, honor /clear between units of work (rate-limited to once per 10 minutes), handle SIGUSR1 wakes and SIGTERM shutdown, publish sleep state to a JSON file the dashboard reads. Full writeup in docs/streaming-agent-loop.md.
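A minimal sketch of that plumbing, assuming POSIX signals and the 10-minute /clear cap from the post; the variable names and JSON shape here are illustrative, not the repo's actual format:

```python
import json
import signal
import threading
import time

wake = threading.Event()  # set by SIGUSR1: wake a sleeping agent
stop = threading.Event()  # set by SIGTERM: graceful shutdown
signal.signal(signal.SIGUSR1, lambda *_: wake.set())
signal.signal(signal.SIGTERM, lambda *_: stop.set())

CLEAR_MIN_INTERVAL = 600.0        # honor /clear at most once per 10 min
_last_clear = [float("-inf")]

def may_clear(now=None):
    """Rate-limit /clear between units of work."""
    now = time.monotonic() if now is None else now
    if now - _last_clear[0] >= CLEAR_MIN_INTERVAL:
        _last_clear[0] = now
        return True
    return False

def publish_state(path, state):
    """Write the sleep/awake state that the dashboard polls."""
    with open(path, "w") as f:
        json.dump({"state": state, "updated": time.time()}, f)
```

The signal-plus-events split keeps the handlers tiny: the loop body checks `wake` and `stop` at safe points instead of doing real work inside a signal handler.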

A command surface on top of Claude Code that drives AgentDM's admin MCP directly. When you run /teamfuse-init, you answer about ten questions (company name, brief, roles to provision, GitHub org, board setup, local clone paths) and the skill does the rest: creates each agent on AgentDM, stores the API key in a per-agent .env, materializes the .mcp.json with the token pre-substituted, creates the #eng #leads #ops channels, seeds skills, writes agents.config.json, and fills every <placeholder> across every CLAUDE.md. Idempotent. Safe to rerun.
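The "safe to rerun" claim hinges on steps like the placeholder fill being no-ops the second time: an already-filled file contains no tokens left to replace. A hedged sketch, with hypothetical placeholder names and template text:

```python
import re

def fill_placeholders(text: str, values: dict) -> str:
    """Substitute <placeholder> tokens; unknown ones are left untouched."""
    def sub(m):
        return values.get(m.group(1), m.group(0))
    return re.sub(r"<([a-z_]+)>", sub, text)

values = {"company_name": "Acme", "board_url": "https://example.com/board"}
template = "You are the PM for <company_name>. Board: <board_url>."

once = fill_placeholders(template, values)
twice = fill_placeholders(once, values)  # rerun: nothing left to replace
assert once == twice  # idempotent
```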

Starting from zero

  gh repo create my-company --template agentdmai/teamfuse --private --clone
  cd my-company
  cd agents-web && npm install && cd ..
  claude
  > /plugin install agentdm@agentdm
  > /reload-plugins
  > /teamfuse-init

Answer the questions. The skill provisions your agents on AgentDM, writes the config, and tells you to open the control panel:

  cd agents-web
  cp .env.example .env.local
  npm run dev

Open http://127.0.0.1:3005. Flip the first breaker. The wrapper forks, status.json starts updating, the agent DMs its first tick. Flip the rest.

We're still iterating on cost per tick. The first pass had agents burning tokens on every polling tick because pm-bot was generating cards for every teammate that reported idle between ticks.
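One way to cut that burn, sketched under our own assumptions (not necessarily what the repo does today): debounce idle reports so pm-bot reacts to a state change rather than to every polling tick that repeats the same state:

```python
class IdleDebouncer:
    """Only act when a teammate's reported state actually changes."""

    def __init__(self):
        self.last = {}  # agent name -> last reported state

    def should_act(self, agent: str, state: str) -> bool:
        changed = self.last.get(agent) != state
        self.last[agent] = state
        return changed

d = IdleDebouncer()
assert d.should_act("qa-bot", "idle") is True    # first report: act once
assert d.should_act("qa-bot", "idle") is False   # repeated tick: ignore
assert d.should_act("qa-bot", "busy") is True    # state change: act again
```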
