AI Navigate

Chow-Liu Ordering for Long-Context Reasoning in Chain-of-Agents

arXiv cs.CL / March 11, 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • Chain-of-Agents (CoA) frameworks decompose long-context queries into chunks processed sequentially by LLM-based agents with bounded shared memory, enabling multi-agent long-context reasoning.
  • The order in which these chunks are processed significantly impacts information retention due to the lossy information bottleneck introduced by bounded memory.
  • The authors propose using Chow-Liu trees to learn dependency structures among chunks, prioritizing strongly related chunks for processing.
  • Empirical results demonstrate that breadth-first traversal of the Chow-Liu tree yields better chunk orderings, reducing information loss and improving answer relevance and accuracy on three long-context benchmarks.
  • This approach outperforms default chunk ordering and semantic score-based ordering methods, providing a more effective strategy for long-context multi-agent reasoning.
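The bounded shared memory in the points above can be illustrated with a toy loop. This is a minimal sketch, not the paper's implementation: `run_chain` and its keyword-matching "worker" are hypothetical stand-ins for the LLM agents, but they show why a lossy, fixed-size memory makes the final evidence state depend on the order in which chunks arrive.

```python
def run_chain(chunks, query_terms, memory_size=5):
    """Toy Chain-of-Agents loop: each 'worker' reads one chunk plus the
    bounded shared memory and writes an updated memory for the next agent.

    The worker here is a stand-in for an LLM agent: it extracts words
    matching query_terms and keeps only the most recent memory_size of
    them, so evidence is silently dropped once the memory overflows --
    the lossy bottleneck that makes chunk order matter.
    """
    memory = []
    for chunk in chunks:
        # Worker step: merge current memory with this chunk's evidence.
        evidence = [w for w in chunk.split() if w in query_terms]
        # Bounded memory: oldest evidence falls off the front.
        memory = (memory + evidence)[-memory_size:]
    return memory
```

Running the same chunks in two different orders yields different final memories, which is the order-sensitivity the paper targets.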


arXiv:2603.09835 (cs)
[Submitted on 10 Mar 2026]

Title: Chow-Liu Ordering for Long-Context Reasoning in Chain-of-Agents

Authors: Naman Gupta and 10 other authors
Abstract: Sequential multi-agent reasoning frameworks such as Chain-of-Agents (CoA) handle long-context queries by decomposing inputs into chunks and processing them sequentially using LLM-based worker agents that read from and update a bounded shared memory. From a probabilistic perspective, CoA aims to approximate the conditional distribution corresponding to a model capable of jointly reasoning over the entire long context. CoA achieves this through a latent-state factorization in which only bounded summaries of previously processed evidence are passed between agents. The resulting bounded-memory approximation introduces a lossy information bottleneck, making the final evidence state inherently dependent on the order in which chunks are processed.
In this work, we study the problem of chunk ordering for long-context reasoning. We use the well-known Chow-Liu trees to learn a dependency structure that prioritizes strongly related chunks. Empirically, we show that a breadth-first traversal of the resulting tree yields chunk orderings that reduce information loss across agents and consistently outperform both default document-chunk ordering and semantic score-based ordering in answer relevance and exact-match accuracy across three long-context benchmarks.
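The ordering step the abstract describes can be sketched concretely: a Chow-Liu tree is the maximum spanning tree over pairwise mutual-information scores, and a breadth-first walk of that tree yields the processing order. This is an illustrative sketch assuming MI estimates between chunks are already available; the paper's MI estimator, root selection, and tie-breaking are not specified here, and `chow_liu_order` is a hypothetical name.

```python
from collections import deque

def chow_liu_order(mi, root=0):
    """Order chunks by BFS over the Chow-Liu tree.

    mi: symmetric matrix, mi[i][j] = estimated mutual information
        between chunks i and j (assumed precomputed).
    Returns a list of chunk indices giving the processing order.
    """
    n = len(mi)
    # Prim's algorithm for the *maximum* spanning tree: the Chow-Liu
    # tree maximizes total pairwise mutual information over edges.
    in_tree = {root}
    children = {i: [] for i in range(n)}
    best = {j: (mi[root][j], root) for j in range(n) if j != root}
    while len(in_tree) < n:
        j = max(best, key=lambda k: best[k][0])  # strongest attachment
        _, parent = best.pop(j)
        in_tree.add(j)
        children[parent].append(j)
        for k in list(best):  # relax remaining nodes through j
            if mi[j][k] > best[k][0]:
                best[k] = (mi[j][k], j)
    # Breadth-first traversal gives the chunk order, visiting each
    # node's children in decreasing-MI order.
    order, queue = [], deque([root])
    while queue:
        u = queue.popleft()
        order.append(u)
        queue.extend(sorted(children[u], key=lambda c: -mi[u][c]))
    return order
```

BFS keeps strongly related chunks close together in the processing sequence, which is the property the paper credits for reduced information loss across agents.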
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2603.09835 [cs.CL]
  (or arXiv:2603.09835v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2603.09835
arXiv-issued DOI via DataCite

Submission history

From: Vaibhav Singh [view email]
[v1] Tue, 10 Mar 2026 15:57:35 UTC (188 KB)