Federation over Text: Insight Sharing for Multi-Agent Reasoning

arXiv cs.LG / April 21, 2026


Key Points

  • The paper introduces Federation over Text (FoT), a federated-learning-like approach for LLM agents that transfers useful “metacognitive insights” across agents and tasks.
  • Instead of federating model gradients or relying on supervision, FoT aggregates agents’ semantic reasoning traces via a central server to build a reusable, cross-task insight library.
  • Agents independently perform local thinking and self-improvement on their own tasks, then share reasoning traces that are iteratively distilled into a shared library for others to use.
  • Experiments indicate FoT improves both the effectiveness and efficiency of downstream reasoning, with a reported 24% gain in average downstream accuracy and a 28% reduction in reasoning tokens across the first two application settings (mathematical problem solving and cross-domain collaboration).
  • In an ML research insight discovery setting, FoT-produced insights reportedly cover over 90% of major contributions found in subsequent papers, suggesting strong reuse of learned reasoning strategies.

Abstract

LLM-powered agents often reason from scratch when presented with a new problem instance and lack automatic mechanisms to transfer learned skills to other agents. We propose a federated learning-like framework, Federation over Text (FoT), that enables multiple agents solving different tasks to collectively generate a shared library of metacognitive insights by iteratively federating their local reasoning processes. Instead of federating over gradients (e.g., as in distributed training), FoT operates at the semantic level without any gradient optimization or supervision signal. In each iteration, every agent independently performs local thinking and self-improvement on its own tasks and shares its reasoning traces with a central server, which aggregates and distills them into a cross-task (and cross-domain) insight library that existing and future agents can leverage to improve performance on related tasks. Experiments show that FoT improves reasoning effectiveness and efficiency across a wide range of challenging applications, including mathematical problem solving, cross-domain collaboration, and machine learning research insight discovery. Specifically, it improves average accuracy on downstream tasks by 24% while reducing reasoning tokens by 28% across the first two applications. In the research insight discovery application, FoT generates insights that cover over 90% of the major contributions in the subsequent papers.