Context-Agent: Dynamic Discourse Trees for Non-Linear Dialogue

arXiv cs.CL / 4/8/2026


Key Points

  • The paper argues that standard LLM dialogue handling—treating conversation history as a flat sequence—fails to match the hierarchical, branching nature of human discourse and can degrade coherence over long interactions.
  • It proposes Context-Agent, a framework that represents multi-turn dialogue context as a dynamic tree, allowing the system to maintain and traverse multiple topic branches as conversations shift.
  • To evaluate non-linear, long-horizon behavior, the authors introduce the NTM (Non-linear Task Multi-turn Dialogue) benchmark tailored to measure performance in branching dialogue scenarios.
  • Experiments reported in the paper show higher task completion rates and better token efficiency across multiple LLMs, suggesting that structured context management improves effectiveness in complex dialogues.
  • The authors release the dataset and code on GitHub to support replication and further development.

Abstract

Large Language Models demonstrate outstanding performance in many language tasks but still face fundamental challenges in managing the non-linear flow of human conversation. The prevalent approach of treating dialogue history as a flat, linear sequence is misaligned with the intrinsically hierarchical and branching structure of natural discourse, leading to inefficient context utilization and a loss of coherence during extended interactions involving topic shifts or instruction refinements. To address this limitation, we introduce Context-Agent, a novel framework that models multi-turn dialogue history as a dynamic tree structure. This approach mirrors the inherent non-linearity of conversation, enabling the model to maintain and navigate multiple dialogue branches corresponding to different topics. Furthermore, to facilitate robust evaluation, we introduce the Non-linear Task Multi-turn Dialogue (NTM) benchmark, specifically designed to assess model performance in long-horizon, non-linear scenarios. Our experiments demonstrate that Context-Agent enhances task completion rates and improves token efficiency across various LLMs, underscoring the value of structured context management for complex, dynamic dialogues. The dataset and code are available on GitHub.
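To make the core idea concrete, the tree-structured context described above can be sketched as a minimal data structure. This is an illustrative toy, not the paper's implementation: the names (`DialogueTree`, `TurnNode`, `add_turn`, `switch_to`, `context`) are hypothetical, and the real system presumably includes richer branch-selection logic.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TurnNode:
    """One dialogue turn; children are follow-ups, possibly opening new topics."""
    role: str
    text: str
    parent: Optional["TurnNode"] = None
    children: list = field(default_factory=list)

class DialogueTree:
    """Toy sketch of a dynamic dialogue tree (names are illustrative): each
    topic shift starts a new branch, and the prompt context is rebuilt from
    the root-to-active-turn path only, so unrelated sibling branches do not
    consume context tokens."""

    def __init__(self) -> None:
        self.root = TurnNode("system", "<root>")
        self.active = self.root

    def add_turn(self, role: str, text: str) -> TurnNode:
        """Append a turn under the active node and make it the new focus."""
        node = TurnNode(role, text, parent=self.active)
        self.active.children.append(node)
        self.active = node
        return node

    def switch_to(self, node: TurnNode) -> None:
        """Jump focus to an earlier turn, e.g. when the user revisits a topic."""
        self.active = node

    def context(self) -> list:
        """Linearize only the active branch (root excluded) for the prompt."""
        path, node = [], self.active
        while node is not None and node is not self.root:
            path.append((node.role, node.text))
            node = node.parent
        return list(reversed(path))
```

The key design point this illustrates is that linearizing a single root-to-leaf path, rather than the full history, is what yields the token-efficiency gains the paper reports: turns from abandoned or parallel topics stay in the tree for later traversal but are excluded from the current prompt.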