Reason in Chains, Learn in Trees: Self-Rectification and Grafting for Multi-turn Agent Policy Optimization

arXiv cs.AI / 4/10/2026


Key Points

  • The paper argues that multi-turn RL for LLM agents is limited by sparse and poorly credited rewards when training treats sampled trajectories as independent “chains.”
  • It introduces T-STAR (Tree-structured Self-Taught Agent Rectification), which merges correlated steps across trajectories into a unified “Cognitive Tree” to recover latent reward structure.
  • An Introspective Valuation mechanism propagates trajectory-level rewards back through the tree to compute variance-reduced, step-level relative advantage for more effective optimization.
  • Using the Cognitive Tree, it proposes In-Context Thought Grafting to generate corrective reasoning by contrasting successful vs. failed branches at divergence points.
  • Experiments on embodied, interactive, reasoning, and planning benchmarks show consistent improvements over strong baselines, especially for tasks requiring long reasoning chains.

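To make the tree idea concrete, here is a minimal sketch of how trajectories could be merged into a Cognitive Tree and how trajectory-level rewards could be back-propagated into step-level relative advantages. The step-matching rule (exact normalized-text equality) and the mean-reward valuation are simplifying assumptions of this sketch, not the paper's actual criteria.

```python
from collections import defaultdict

def build_cognitive_tree(trajectories):
    """Merge sampled trajectories into a prefix tree, treating steps with
    identical normalized text as 'functionally similar' (a stand-in for the
    paper's unspecified step-matching criterion).

    trajectories: list of (steps, reward) pairs, where steps is a list of
    step strings and reward is the trajectory-level scalar reward.
    """
    children = defaultdict(set)   # node -> set of child nodes
    rewards = defaultdict(list)   # node -> rewards of trajectories through it
    for steps, reward in trajectories:
        path = ()                 # a node is the tuple of steps from the root
        rewards[path].append(reward)
        for step in steps:
            nxt = path + (step.strip().lower(),)
            children[path].add(nxt)
            rewards[nxt].append(reward)
            path = nxt
    return children, rewards

def introspective_valuation(children, rewards):
    """Back-propagate trajectory rewards through the tree: a node's value is
    the mean reward of trajectories passing through it, and a step's relative
    advantage is its node value minus the parent's value. The parent acts as a
    group baseline, reducing variance versus crediting whole chains uniformly.
    """
    value = {node: sum(rs) / len(rs) for node, rs in rewards.items()}
    advantage = {}
    for parent, kids in children.items():
        for kid in kids:
            advantage[kid] = value[kid] - value[parent]
    return value, advantage
```

On a toy example, the critical step gets the credit: if two trajectories share a prefix and diverge into a successful and a failed branch, the diverging steps receive advantages of opposite sign while the shared prefix stays near the group mean.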
Abstract

Reinforcement learning for Large Language Model agents is often hindered by sparse rewards in multi-step reasoning tasks. Existing approaches like Group Relative Policy Optimization treat sampled trajectories as independent chains, assigning uniform credit to all steps in each chain and ignoring critical steps that may disproportionately impact the reasoning outcome. In this paper, we propose T-STAR (Tree-structured Self-Taught Agent Rectification), a framework that recovers the latent correlated reward structure across seemingly independent trajectories. Specifically, we consolidate trajectories into a unified Cognitive Tree by identifying and merging functionally similar steps (nodes). This enables an Introspective Valuation mechanism that back-propagates trajectory-level rewards through the tree to obtain a new notion of variance-reduced, step-level relative advantage. Using the Cognitive Tree, we also develop In-Context Thought Grafting to synthesize corrective reasoning by contrasting successful and failed branches at critical divergence steps. Our proposed Surgical Policy Optimization then capitalizes on the rich policy-gradient information concentrated at these critical steps through a Bradley-Terry-style surgical loss. Extensive experiments across embodied, interactive, reasoning, and planning benchmarks demonstrate that T-STAR achieves consistent improvements over strong baselines, with gains most pronounced on tasks requiring extended reasoning chains.
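The abstract mentions a Bradley-Terry-style surgical loss at divergence points but gives no formula. A generic Bradley-Terry pairwise objective consistent with that description would push the policy's log-probability of the successful branch above its failed sibling; the sketch below assumes log-probabilities as branch scores and a hypothetical temperature `beta`, neither of which is specified in this summary.

```python
import math

def bradley_terry_surgical_loss(logp_success, logp_failure, beta=1.0):
    """Bradley-Terry pairwise loss at a divergence point: -log sigmoid of the
    scaled margin between the successful and failed branch scores. Minimizing
    it increases the policy's preference for the successful branch. `beta` is
    a hypothetical temperature; the paper's exact scoring terms are not given
    in this summary.
    """
    margin = beta * (logp_success - logp_failure)
    # -log sigmoid(margin) = log(1 + exp(-margin)), written stably for
    # large |margin| to avoid overflow in exp().
    if margin > 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))
```

The loss is log(2) at zero margin, decays toward zero as the successful branch becomes more likely, and grows roughly linearly when the policy prefers the failed branch, so the gradient concentrates exactly at the divergence steps where the two branches disagree.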