Federation over Text: Insight Sharing for Multi-Agent Reasoning
arXiv cs.LG · April 21, 2026
Key Points
- The paper introduces Federation over Text (FoT), a federated-learning-like approach for LLM agents that transfers useful “metacognitive insights” across agents and tasks.
- Instead of federating model gradients or relying on supervision, FoT aggregates agents’ semantic reasoning traces via a central server to build a reusable, cross-task insight library.
- Agents independently perform local reasoning and self-improvement on their own tasks, then share their reasoning traces, which are iteratively distilled into a shared library that other agents can draw on (see the code sketch after this list).
- Experiments indicate FoT improves both the effectiveness and the efficiency of downstream reasoning, including a reported 24% gain in average downstream accuracy and a 28% reduction in reasoning tokens across early application settings.
- In an ML research insight discovery setting, FoT-produced insights reportedly cover over 90% of major contributions found in subsequent papers, suggesting strong reuse of learned reasoning strategies.
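To make the workflow concrete, here is a minimal Python sketch of one FoT-style federation round: agents solve local tasks conditioned on the current insight library, and a central server distills their text traces into new insights. All names here (`Agent`, `InsightServer`, `call_llm`) and the distillation prompt are illustrative assumptions, not the paper's actual prompts or aggregation procedure; `call_llm` is a stub standing in for a real model call.

```python
# Hypothetical sketch of a Federation-over-Text round: what gets
# federated is text (reasoning traces and distilled insights), not
# gradients. Names and prompts are assumptions, not the paper's API.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (swap in any API client)."""
    return f"[model output for: {prompt[:40]}...]"


@dataclass
class Agent:
    name: str

    def solve(self, task: str, insights: list[str]) -> tuple[str, str]:
        """Solve a local task conditioned on shared insights.

        Returns (answer, reasoning_trace); the trace is what the
        agent later shares with the server.
        """
        context = "\n".join(insights)
        trace = call_llm(
            f"Insights so far:\n{context}\n\nTask: {task}\n"
            "Think step by step, then answer."
        )
        answer = call_llm(f"Given this reasoning:\n{trace}\nFinal answer:")
        return answer, trace


@dataclass
class InsightServer:
    """Central server: aggregates traces into a reusable text library."""

    library: list[str] = field(default_factory=list)

    def aggregate(self, traces: list[str]) -> None:
        # Distill raw traces into compact, task-general strategies.
        # A single LLM call stands in for the paper's iterative
        # distillation process here.
        distilled = call_llm(
            "Distill these reasoning traces into short, reusable "
            "strategies:\n" + "\n---\n".join(traces)
        )
        self.library.append(distilled)


def federation_round(agents: list[Agent], tasks: dict[str, str],
                     server: InsightServer) -> None:
    """One round: independent local solving, then central distillation."""
    traces = []
    for agent in agents:
        _, trace = agent.solve(tasks[agent.name], server.library)
        traces.append(trace)
    server.aggregate(traces)


if __name__ == "__main__":
    agents = [Agent("a1"), Agent("a2")]
    tasks = {"a1": "Prove n^2 >= n for n >= 1.", "a2": "Sort [3, 1, 2]."}
    server = InsightServer()
    for _ in range(2):  # insights from round 1 feed agents in round 2
        federation_round(agents, tasks, server)
    print(server.library)
```

The key design point the sketch illustrates is that the server never sees model weights or gradients: only semantic traces cross the federation boundary, and the library they produce is plain text that any agent (or any model) can consume in-context.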