Forage V2: Knowledge Evolution and Transfer in Autonomous Agent Organizations
arXiv cs.AI / 4/23/2026
Key Points
- Forage V2 targets “denominator blindness” in open-world autonomous agents by extending V1’s co-evolving evaluation and method isolation into a learning-organization architecture.
- The approach accumulates knowledge across multiple runs, transfers that knowledge across different model capabilities, and uses institutional safeguards to prevent degradation of stored evaluation heuristics.
- Experiments across web scraping, API queries, and mathematical reasoning show knowledge entries growing from 0 to 54 over six runs, with denominator estimates stabilizing as domain understanding improves.
- Knowledge transfer is demonstrated by seeding a weaker model (Sonnet) with a stronger model's (Opus) accumulated knowledge, shrinking the coverage gap from 6.6 to 1.1 percentage points, cutting cost from 9.40 to 5.13 USD, and reaching convergence faster (4.5 vs. 7.0 rounds).
- V2’s key contribution is architectural: it proposes organizational “institutions” (audit separation, contract protocols, organizational memory) so future agents can inherit and rely on calibrated, readable knowledge independent of model provider.
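The mechanisms summarized above (organizational memory, audit separation, and cross-model seeding) can be sketched in Python. All names here (`KnowledgeEntry`, `OrganizationalMemory`, the field layout) are illustrative assumptions, not the paper's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeEntry:
    """A readable, calibrated heuristic stored in organizational memory."""
    domain: str                   # e.g. "web_scraping", "api_queries"
    heuristic: str                # human-readable evaluation rule
    denominator_estimate: float   # estimated size of the relevant population
    source_model: str             # model that produced the entry

@dataclass
class OrganizationalMemory:
    entries: list = field(default_factory=list)

    def add(self, entry: KnowledgeEntry, auditor: str) -> None:
        # Audit separation: the reviewer must be independent of the
        # producing model, guarding stored heuristics from degradation.
        if auditor == entry.source_model:
            raise ValueError("auditor must be independent of the producer")
        self.entries.append(entry)

    def seed_from(self, other: "OrganizationalMemory") -> None:
        # Knowledge transfer: a weaker model inherits a stronger model's
        # calibrated entries instead of rediscovering them from scratch.
        self.entries.extend(other.entries)

# Hypothetical usage: seed a Sonnet-backed store from an Opus-backed one.
opus_memory = OrganizationalMemory()
opus_memory.add(
    KnowledgeEntry("web_scraping", "expect paginated listings", 120.0, "opus"),
    auditor="audit-agent",
)
sonnet_memory = OrganizationalMemory()
sonnet_memory.seed_from(opus_memory)
```

Because entries are plain, readable records rather than model weights, this kind of store is provider-independent: any future agent can load and audit it.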