I've been building a system where multiple AI agents execute structured work under explicit governance rules. Sharing it because the architecture might be interesting to people building multi-agent systems.
What it does: You set a goal. A coordinator agent decomposes it into tasks. Specialized agents (developer, designer, QA, etc.) execute through controlled tool access, collaborate via explicit handoffs, and produce artifacts. QA agents validate outputs. Escalations surface for human approval.
What's different from CrewAI/AutoGen/LangGraph:
The focus isn't on the agent — it's on the governance and execution layer around the agent.
- Tool calls go through an MCP gateway with per-role permission checks and audit logging
- Zero shared mutable state between agents — collaboration through structured handoffs only
- Policy engine with configurable approval workflows (proceed/block/timeout-with-default)
- Append-only task versioning — every modification creates a new version with author and reason
- Built-in evaluation engine that scores tasks on quality, iterations, latency, cost, and policy compliance
- Agent reputation scoring with a weighted formula (QA pass rate, iteration efficiency, latency, cost, reliability)
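To make the last bullet concrete, here's a minimal sketch of what a weighted reputation score could look like. The factor names come from the list above, but the weights and normalization are illustrative assumptions, not the project's actual formula:

```python
# Illustrative weights — assumes each factor is already normalized to [0, 1],
# with latency/cost inverted so that higher is always better.
WEIGHTS = {
    "qa_pass_rate": 0.35,
    "iteration_efficiency": 0.25,
    "latency": 0.15,
    "cost": 0.15,
    "reliability": 0.10,
}

def reputation_score(metrics: dict[str, float]) -> float:
    """Weighted sum of normalized agent metrics."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

score = reputation_score({
    "qa_pass_rate": 0.9,
    "iteration_efficiency": 0.8,
    "latency": 0.7,
    "cost": 0.6,
    "reliability": 1.0,
})  # 0.81
```

The nice property of a plain weighted sum is that task routing can just pick the highest-scoring eligible agent, and the weights become a tunable governance knob.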
Architecture: 5 layers with strict boundaries — frontend (visualization only), API gateway (auth/RBAC), orchestration engine (24 modules), agent runtime (role-based, no direct tool access), MCP gateway (the only path to tools).
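Since the MCP gateway is the only path to tools, every call can be permission-checked and audited in one place. A hedged sketch of that choke point (role names, tool ids, and the log shape are my assumptions, not the project's schema):

```python
from datetime import datetime, timezone

# Hypothetical per-role tool allowlists — illustrative only.
ROLE_TOOLS = {
    "developer": {"git", "filesystem", "test_runner"},
    "qa": {"test_runner", "filesystem"},
    "designer": {"filesystem"},
}

AUDIT_LOG: list[dict] = []

def gateway_call(role: str, tool: str, payload: dict) -> bool:
    """Check the role's allowlist, audit the attempt, and report the verdict."""
    allowed = tool in ROLE_TOOLS.get(role, set())
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "tool": tool,
        "allowed": allowed,
    })
    return allowed  # caller dispatches to the real tool only if True

gateway_call("qa", "git", {})  # denied, but still audited
```

Note that the denied call is logged too — the audit trail records attempts, not just successes.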
Stack: React + TypeScript, FastAPI, SQLite WAL, pluggable LLM providers (OpenAI, Anthropic, Azure), MCP protocol.
Configurable: Different team presets (software, marketing, custom), operating models with different governance rules, pluggable LLM backends, reusable skills, and MCP-backed integrations.
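One governance rule worth illustrating is the timeout-with-default approval flow mentioned earlier: an escalation waits for a human decision, and if none arrives in time, a configured default applies. A minimal sketch, assuming a queue-backed approval channel (the function and parameter names are mine):

```python
import queue

def await_approval(q: "queue.Queue[str]",
                   timeout_s: float,
                   default: str = "proceed") -> str:
    """Block until a human decision arrives, else fall back to the default."""
    try:
        return q.get(timeout=timeout_s)
    except queue.Empty:
        return default

approvals: "queue.Queue[str]" = queue.Queue()
decision = await_approval(approvals, timeout_s=0.1)  # no reply -> "proceed"
```

Swapping the default to `"block"` per operating model is what lets the same engine run in a permissive mode for low-stakes teams and a fail-closed mode for regulated ones.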
I'd love to get your feedback on this, and to hear whether it's something you'd actually use.