Week 6 AIPass update - answering the top questions from last post (file conflicts, remote models, scale)

Reddit r/artificial / 4/16/2026


Key Points

  • The Week 6 AIPass update addresses common multi-agent concerns, explaining that shared-file write conflicts rarely occur because the framework enforces planning and file/phase assignment.
  • It details several mechanisms to prevent agent collisions, including queueing agents instead of spawning duplicates, using a git/PR workflow with orchestrator-only merges, and maintaining structured state via JSON instead of markdown.
  • The article clarifies that AIPass can run with remote LLMs while keeping orchestration, memory, files, and messaging local, invoking existing Claude/Codex/Gemini CLIs as subprocesses to avoid extra API-key handling.
  • For scale, the author reports running 30 agents concurrently and nested sub-agent setups (3×40) at high CPU usage, concluding that compute—not the framework—is the main limiting factor.
  • The update also lists shipped improvements this week, including a new watchdog module with multiple handlers and extensive tests, fixes to a git PR lock file leak, and various quality-check improvements.

Followup to last post with answers to the top questions from the comments. Appreciate everyone who jumped in.

The most common one by a mile was "what happens when two agents write to the same file at the same time?" Fair question; it's the first thing everyone asks about a shared-filesystem setup. Honest answer: it almost never happens, because the framework makes it hard to happen.

Four things keep it clean:

  1. Planning first. Every multi-agent task runs through a flow plan template before any file gets touched. The plan assigns files and phases, so agents don't collide by default. Templates here if you're curious: github.com/AIOSAI/AIPass/tree/main/src/aipass/flow/templates

  2. Dispatch blockers. An agent can't exist in two places at once. If five senders email the same agent about the same thing, it queues them instead of spawning five copies. No "5 agents fixing the same bug" nightmares.

  3. Git flow. Agents don't merge their own work. They build features locally, submit a PR, and only the orchestrator merges. While an agent is writing a PR, it sets a repo-wide git block until it's done.

  4. JSON over markdown for state files. Markdown let agents drift into their own formats over time; JSON holds structure. You can run `cat .trinity/local.json` and see exactly what an agent thinks at any time.

Second common question: "doesn't a local framework with a remote model defeat the point?" Local means the orchestration is local: agents, memory, files, and messaging all live on your machine. The model is the brain you plug in. And you don't need API keys; AIPass runs on your existing Claude Pro/Max, Codex, or Gemini CLI subscription by invoking each official CLI as a subprocess. No token extraction, no proxying, nothing sketchy. Or point it at a local model. Or mix all of them. You're not locked to one vendor, and you're not paying for API credits on top of a sub you already have.
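The "invoke the official CLI as a subprocess" approach can be sketched in a few lines. This is a generic illustration, not AIPass's actual code; the `call_model_cli` helper and the exact CLI flags you pass are assumptions, and the demo uses `echo` as a stand-in so it runs without any model CLI installed.

```python
import subprocess

def call_model_cli(argv: list[str], prompt: str) -> str:
    """Invoke a model CLI as a subprocess and return its stdout.

    argv is the CLI invocation prefix, e.g. something like
    ["claude", "-p"] for a print-mode call (assumption: the CLI
    accepts the prompt as a trailing argument).
    """
    result = subprocess.run(
        argv + [prompt],
        capture_output=True,
        text=True,
        timeout=300,  # don't let a stuck CLI hang the orchestrator
    )
    result.check_returncode()  # raise if the CLI exited non-zero
    return result.stdout.strip()

# Stand-in demo with `echo`, so the sketch runs anywhere:
print(call_model_cli(["echo"], "hello from the orchestrator"))
```

Because the subprocess is the vendor's own CLI, authentication stays inside that CLI's normal login flow; the orchestrator only sees text in and text out, which is also what makes swapping vendors (or a local model's CLI) a one-line change.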

On scale: I've run 30 agents at once without a crash, and 3 agents each with 40 sub-agents at around 80% CPU with occasional spikes. Compute is the bottleneck, not the framework. I'd love to test 1000, but my machine would cry before I got there. If someone wants to try it, please tell me what broke.

Shipped this week: a new watchdog module (5 handlers, 100+ tests) for event automation, a fix for git PR lock files that were leaking into commits, plus a bunch of quality-checker fixes.

About 6 weeks in. Solo dev, every PR is a human+AI collab.

pip install aipass

https://github.com/AIOSAI/AIPass

Keep the questions coming; that's what got this post written.

submitted by /u/Input-X