Reasoning Graphs: Deterministic Agent Accuracy through Evidence-Centric Chain-of-Thought Feedback

arXiv cs.CL / 4/10/2026


Key Points

  • The paper argues that an agent's chain of thought resets between similar queries: prior deliberations are discarded, producing lower accuracy and high run-to-run variance.
  • It introduces reasoning graphs, which persist an agent’s deliberation tied to specific retrieved evidence by storing structured, evidence-connected edges that can be traversed in later runs.
  • It contrasts this evidence-centric backward traversal with prior memory approaches that retrieve by query similarity or recency, emphasizing that feedback is tied to the currently evaluated evidence rather than the query.
  • It also proposes retrieval graphs to iteratively tighten the candidate set via a pipeline planner, and claims that the combined graphs form a self-improving loop that boosts accuracy while collapsing variance without retraining.
  • The authors formalize the structures and traversal algorithms and outline an evaluation protocol (sequential cluster evaluation) to measure accuracy convergence on multi-hop question answering benchmarks.
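The evidence-centric backward traversal described above can be sketched as a small data structure. This is a hypothetical illustration, not the paper's implementation: the class and field names (`ReasoningGraph`, `EvaluationEdge`, `backward_traverse`) are invented here, and the sketch assumes evidence items carry stable identifiers.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class EvaluationEdge:
    # One persisted deliberation step: how the agent judged an
    # evidence item during a past run (names are illustrative).
    run_id: str
    query: str
    verdict: str     # e.g. "supports", "irrelevant", "contradicts"
    rationale: str   # the per-evidence chain-of-thought fragment


class ReasoningGraph:
    """Sketch: evidence items are nodes; each past evaluation is stored
    as an incoming edge on the specific evidence item it judged."""

    def __init__(self):
        self._incoming = defaultdict(list)  # evidence_id -> [EvaluationEdge]

    def record(self, evidence_id, edge):
        # Persist the deliberation instead of discarding it after the run.
        self._incoming[evidence_id].append(edge)

    def backward_traverse(self, candidate_ids):
        # Evidence-centric feedback: for each candidate the agent is
        # currently examining, surface every prior judgment of that
        # specific item, across all runs and regardless of which query
        # produced it. This is the structural difference from
        # query-similarity retrieval, which indexes by the query instead.
        return {eid: self._incoming[eid]
                for eid in candidate_ids if eid in self._incoming}
```

Keying edges by evidence identifier rather than by query is what makes the lookup independent of how the current query is phrased: two differently worded questions that retrieve the same document see the same accumulated judgments.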

Abstract

Language model agents reason from scratch on every query: each time an agent retrieves evidence and deliberates, the chain of thought is discarded and the next similar query starts with no prior insight. This produces lower accuracy and high variance, as the same type of query can succeed or fail unpredictably. We introduce reasoning graphs, a graph structure that persists an agent's per-evidence chain of thought as structured edges connected to the evidence items they evaluate. Unlike prior memory mechanisms that store distilled strategies as flat records indexed by query similarity or appended by recency, reasoning graphs enable evidence-centric feedback: given a new candidate set, the system traverses all incoming evaluation edges for each evidence item across all prior runs, surfacing how that specific item has been judged before. This backward traversal from evidence inward is a structurally different capability from query-similarity retrieval, because the feedback is tied to the specific evidence the agent is currently examining, not to the query. We further introduce retrieval graphs, a complementary structure that feeds a pipeline planner to tighten the candidate funnel over successive runs. Together, both graphs form a self-improving feedback loop: accuracy rises and variance collapses over successive runs, with every decision fully traceable through the graph. This improvement requires no retraining; the base model remains frozen and all gains come from context engineering via graph traversal. We formalize the graph structure, traversal algorithms, and feedback mechanisms, and describe a sequential cluster evaluation protocol for measuring accuracy convergence and variance collapse on multi-hop question answering benchmarks.
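The retrieval-graph side of the loop, tightening the candidate funnel over successive runs, can also be sketched. Again this is an assumed minimal design, not the paper's algorithm: the names (`RetrievalGraph`, `tighten`, `min_precision`) and the usefulness-rate heuristic are placeholders for whatever statistics the authors' pipeline planner actually consumes.

```python
from collections import Counter


class RetrievalGraph:
    """Sketch: track, per evidence item, how often it was retrieved and
    how often it was ultimately judged useful, so a simple planner can
    tighten the candidate funnel on later runs."""

    def __init__(self):
        self.retrieved = Counter()  # evidence_id -> times retrieved
        self.useful = Counter()     # evidence_id -> times judged useful

    def record_run(self, candidate_ids, useful_ids):
        # After each run, fold its outcome back into the graph.
        for eid in candidate_ids:
            self.retrieved[eid] += 1
        for eid in useful_ids:
            self.useful[eid] += 1

    def tighten(self, candidate_ids, min_precision=0.5):
        # Keep unseen items (no history yet) plus items whose historical
        # usefulness rate clears the threshold; drop known dead weight.
        kept = []
        for eid in candidate_ids:
            seen = self.retrieved[eid]
            if seen == 0 or self.useful[eid] / seen >= min_precision:
                kept.append(eid)
        return kept
```

Because both structures only rewrite the context handed to a frozen model, each run's judgments feed the next run's traversal, which is the self-improving loop the abstract describes: no weights change, yet the candidate set shrinks and prior per-evidence verdicts accumulate.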