Graph-Based Chain-of-Thought Pruning for Reducing Redundant Reflections in Reasoning LLMs

arXiv cs.CL / 4/8/2026


Key Points

  • The paper identifies that extending chain-of-thought (CoT) reasoning via reinforcement learning can produce “overthinking” driven by inefficient reflection, which manifests mainly as indiscriminate low-impact checks and repetitive re-verification of already established conclusions.
  • It proposes converting linear CoT into a directed acyclic graph (DAG) with dependency edges, enabling a dual pruning strategy that prunes weak reflection branches and removes late-stage redundant re-checks.
  • The authors train a distilled pruning policy using a three-stage pipeline: SFT on concise pruned traces, DPO to prefer correct yet less redundant trajectories, and GRPO with a length penalty to balance correctness and efficiency.
  • Experiments report a 42% reduction in average reasoning tokens with maintained or improved accuracy, indicating efficiency gains without a performance trade-off.
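As a rough sketch of the dual pruning strategy described above (the paper does not publish its data structures, so the node representation, the `impact` score, and both thresholds here are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One reasoning step of the CoT, lifted into a DAG node."""
    idx: int
    kind: str                                 # "step" or "reflection" (assumed labels)
    impact: float                             # assumed contribution score
    deps: list = field(default_factory=list)  # indices of prerequisite nodes

def dual_prune(nodes, impact_threshold=0.1, depth_cutoff=0.8):
    """Branch-level pruning drops weakly contributing reflection nodes;
    depth-level pruning drops reflections in the late portion of the
    trace (re-verification of already-established conclusions)."""
    n = len(nodes)
    kept = []
    for node in nodes:
        if node.kind == "reflection":
            # Branch-level: remove low-impact checks anywhere in the trace.
            if node.impact < impact_threshold:
                continue
            # Depth-level: remove late-stage re-verification.
            if node.idx >= depth_cutoff * n:
                continue
        kept.append(node)
    # Keep only dependency edges between surviving nodes.
    alive = {node.idx for node in kept}
    for node in kept:
        node.deps = [d for d in node.deps if d in alive]
    return kept
```

The pruned node list can then be linearized back into a concise trace for the SFT stage; how impact scores are actually estimated is left to the paper.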

Abstract

Extending CoT through RL has been widely used to enhance the reasoning capabilities of LLMs. However, due to the sparsity of reward signals, it can also induce undesirable thinking patterns such as overthinking, i.e., generating redundant intermediate reasoning content. In this work, we argue that a major source of such redundancy is inefficient reflection, which often manifests in two problematic patterns: Indiscriminate Reflection, where the model performs broad, low-impact checks throughout reasoning, and Repetitive Reflection, where it repeatedly re-verifies an already established conclusion. To address this, we introduce a graph-based CoT optimization framework. Specifically, we convert each linear CoT into a directed acyclic graph (DAG) with explicit dependency edges, and design a dual pruning strategy: branch-level pruning removes weakly contributing reflection branches, while depth-level pruning eliminates late-stage re-verification. We distill this behavior via a three-stage pipeline: (1) SFT to initialize the policy on pruned concise traces, (2) DPO to prefer correct but less redundant trajectories, and (3) GRPO with length penalty to jointly optimize answer correctness and efficiency. Experiments show that our approach reduces the average reasoning tokens by 42% while maintaining or improving accuracy.
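The third stage's reward shaping can be sketched as follows; the linear penalty form, the `alpha` coefficient, and the token cap are assumptions rather than the paper's exact formulation. GRPO itself compares each sampled trajectory against the group statistics instead of a learned value network:

```python
def length_penalized_reward(is_correct, num_tokens, max_tokens=4096, alpha=0.5):
    """Combine answer correctness with a length penalty so that, among
    correct trajectories, shorter reasoning traces earn higher reward.
    alpha (assumed) trades correctness off against efficiency."""
    correctness = 1.0 if is_correct else 0.0
    penalty = alpha * min(num_tokens / max_tokens, 1.0)
    return correctness - penalty

def grpo_advantages(rewards):
    """GRPO-style group normalization: each trajectory's advantage is its
    reward standardized by the mean and std of its sampled group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    if std == 0.0:
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]
```

Under this shaping, a correct but verbose trajectory still outranks an incorrect short one, while two correct trajectories are ordered by length, which is the balance the paper's GRPO stage aims for.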