What most people call AI agents, we call sub-agents. The real ones don't get thrown away.

Reddit r/artificial / 5/4/2026


Key Points

  • The author distinguishes between disposable “sub-agents” that complete a single task and vanish, and persistent “citizens” that keep their own code, memory, tests, and identity across sessions.
  • In this layered architecture, citizens act as orchestrators within their own domain, dispatching their own sub-agents as needed rather than being reset each run.
  • Domain systems like mailing and routing are described as deep, experience-trained components (e.g., routing learns from real sessions and bug fixes), with separation of concerns via dedicated memory for each domain.
  • A top-level main orchestrator maintains global state, plans, and direction, and delegates domain-specific improvements to the appropriate citizens.
  • The project aims to provide the persistence and compounding learning layer that many existing agent frameworks lack, pointing readers to a public development journey and GitHub repo.

What most people call an AI agent - spin it up, give it a task, it does the thing, it's gone - we have those too. We just call them what they are: sub-agents. Disposable workers. We spin up dozens in a single session. They do a job and disappear. No memory, no identity.

That's fine for task work, but that's not the interesting part. Above the sub-agents, we have what we call citizens. These are persistent systems that live in their own directory, maintain their own code, have their own memory files, their own tests, a mailbox, a passport. They don't reset between sessions. They don't forget what they learned last week. And here's the key thing - each citizen is an orchestrator in its own domain.
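To make the "persistent system living in its own directory" idea concrete, here's a minimal sketch in Python. The class name, file layout, and `memory.json` format are my own illustrative assumptions, not the project's actual code - the point is just that a citizen's state lives on disk and survives across sessions instead of resetting each run:

```python
import json
from pathlib import Path

class Citizen:
    """Hypothetical persistent domain agent: memory and identity live on disk."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.memory_file = self.root / "memory.json"
        # Load prior memory if it exists; a fresh citizen starts empty.
        if self.memory_file.exists():
            self.memory = json.loads(self.memory_file.read_text())
        else:
            self.memory = {"lessons": []}

    def remember(self, lesson: str) -> None:
        # Append a lesson and persist immediately, so the next session sees it.
        self.memory["lessons"].append(lesson)
        self.root.mkdir(parents=True, exist_ok=True)
        self.memory_file.write_text(json.dumps(self.memory))

mail = Citizen("citizens/mail")
mail.remember("retry failed sends with backoff")
print(mail.memory["lessons"][-1])
```

Re-running this with the same directory picks up where the last session left off - that's the whole difference from a disposable sub-agent.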

Our mail system doesn't just "do mail." It lives in its branch, has 696 tests it built through its own failures, and it dispatches its own sub-agents when it needs work done. All its memories are about communication - nothing else. That's all it thinks about.

Same with our routing system. 80+ sessions deep. All it knows is how to resolve agent addresses, route messages, handle cross-project dispatch. It learned those patterns through experience - actual bugs, actual fixes, actual sessions. Not configuration.

Then above all of them sits the main orchestrator. It holds the big picture - the full system state, the plans, the direction. When it needs routing fixed, it dispatches to the routing citizen and trusts it to know its own code better than anyone else could. Because it does.

So the architecture is layered: the orchestrator dispatches to citizens, and citizens dispatch their own sub-agents. The sub-agents are disposable. The citizens are not. The citizens are the ones with the domain expertise, the memory, the identity.
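The three-layer flow above can be sketched in a few lines. Again, all names here (`Orchestrator`, `Citizen`, `sub_agent`) are hypothetical stand-ins, assuming the shape described in the post - a top-level dispatcher, persistent domain owners, and throwaway workers underneath:

```python
def sub_agent(task: str) -> str:
    # Disposable worker: does one job, returns a result, keeps no state.
    return f"done: {task}"

class Citizen:
    """Persistent domain owner that spawns its own sub-agents."""

    def __init__(self, domain: str):
        self.domain = domain
        self.memory: list[str] = []  # survives across tasks

    def handle(self, task: str) -> str:
        result = sub_agent(task)   # spin up, use, discard
        self.memory.append(result) # the citizen keeps the lesson
        return result

class Orchestrator:
    """Holds the big picture and delegates to the right domain owner."""

    def __init__(self):
        self.citizens = {"routing": Citizen("routing"), "mail": Citizen("mail")}

    def dispatch(self, domain: str, task: str) -> str:
        # Trust the citizen to know its own domain better than anyone else.
        return self.citizens[domain].handle(task)

top = Orchestrator()
print(top.dispatch("routing", "fix address resolution"))
```

Note where the memory sits: the orchestrator and the sub-agents stay thin, and everything learned accumulates in the citizen for that domain.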

I think that's where the disconnect is with most agent frameworks. Everything is disposable. You configure agents, give them tools, run them, start fresh next time. There's no persistence. No domain depth. No memory that compounds.

We're building the layer underneath - the part where your AI systems actually remember, coordinate, and get better at their specific job over time. What you build on top of that is up to you.

https://github.com/AIOSAI/AIPass

Still figuring out how to explain this tbh. Been building in public for a couple months and this is probably the hardest part - not the code, just getting across what this actually is vs what people expect.

The system isn't perfect - still building, figuring things out as I go. If you're interested in this approach, follow the journey at r/AIPass.

submitted by /u/Input-X