Think Less, Know More: State-Aware Reasoning Compression with Knowledge Guidance for Efficient Reasoning

arXiv cs.CL / 4/13/2026


Key Points

  • The paper introduces STACK, a framework for step-wise chain-of-thought (CoT) compression that reduces unnecessary “overthinking” in Large Reasoning Models while improving inference efficiency.
  • STACK models stage-specific redundancy sources and uses retrieval-augmented knowledge guidance, switching between knowledge-guided compression for uncertain/biased states and self-prompted compression for overly long but confident states.
  • It adds an answer-convergence-based early stopping mechanism to curb redundant verification during reasoning.
  • The authors propose a reward-difference-driven training approach combining PPO and DPO so the model can learn state-conditioned compression policies.
  • Experiments on three mathematical reasoning benchmarks report a ~59.9% reduction in average response length alongside a ~4.8-point accuracy improvement over prior compression methods.
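The second and third points describe a per-step dispatch: compress with retrieved knowledge when the reasoning state is uncertain or biased, self-compress when it is confident but verbose, and stop early once the running answer converges. The summary gives no interfaces, so the sketch below uses hypothetical names (`compress_step`, `knowledge_retriever`, the state labels, the convergence window) purely for illustration:

```python
def compress_step(step, state, knowledge_retriever, answer_history, window=3):
    """State-conditioned step compression (illustrative sketch; the paper's
    actual interfaces and state taxonomy are not given in this summary)."""
    # Answer-convergence early stop: if the intermediate answer has been
    # identical over the last `window` steps, further verification is
    # treated as redundant and reasoning halts.
    if len(answer_history) >= window and len(set(answer_history[-window:])) == 1:
        return None  # converged -> stop reasoning

    if state in ("uncertain", "biased"):
        # Knowledge-guided compression: ground the step in retrieved facts.
        facts = knowledge_retriever(step)
        return f"[KG] {step} | facts: {facts}"

    # Confident but overly long -> self-prompted compression.
    return f"[SELF-COMPRESS] {step}"
```

Calling it with a converged history (e.g. three identical recent answers) returns `None`, signalling the early stop; otherwise the step is routed to one of the two compression modes based on the state label.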

Abstract

Large Reasoning Models (LRMs) achieve strong performance on complex tasks by leveraging long Chain-of-Thought (CoT), but often suffer from overthinking, leading to excessive reasoning steps and high inference latency. Existing CoT compression methods struggle to balance accuracy and efficiency, and lack fine-grained, step-level adaptation to redundancy and reasoning bias. We therefore propose State-Aware Reasoning Compression with Knowledge Guidance (STACK), a framework that performs step-wise CoT compression by explicitly modeling stage-specific redundancy sources and integrating retrieval-augmented guidance. STACK constructs online long-short contrastive samples and dynamically switches between knowledge-guided compression for uncertain or biased reasoning states and self-prompted compression for overly long but confident states, complemented by an answer-convergence-based early stopping mechanism that suppresses redundant verification. We further propose a reward-difference-driven training strategy that combines Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO), enabling models to learn state-conditioned compression strategies. Experiments on three mathematical reasoning benchmarks show that STACK achieves a superior accuracy-efficiency balance, reducing average response length by 59.9% while improving accuracy by 4.8 points over existing methods.
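The abstract pairs a PPO objective with a DPO preference loss over the online long-short contrastive samples, driven by the reward difference between the two traces. The exact combination rule is not given in this summary; the sketch below shows the standard forms of the two losses and one plausible (assumed, not from the paper) reward-difference gating between them, using per-trace scalar log-probabilities for simplicity:

```python
import math

def ppo_clip_term(logp_new, logp_old, advantage, eps=0.2):
    """Standard clipped PPO surrogate for a single action (to be maximised)."""
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    return min(ratio * advantage, clipped * advantage)

def dpo_loss(logp_short, logp_long, ref_logp_short, ref_logp_long, beta=0.1):
    """Standard DPO loss preferring the short (compressed) trace over the
    long one, relative to a frozen reference policy."""
    margin = beta * ((logp_short - ref_logp_short) - (logp_long - ref_logp_long))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

def combined_loss(ppo_args, dpo_args, reward_diff, tau=0.5):
    """Hypothetical reward-difference gating (an assumption, not the paper's
    rule): the larger the reward gap between long and short traces, the more
    weight the preference (DPO) term receives over the PPO term."""
    w = 1.0 / (1.0 + math.exp(-(reward_diff - tau)))
    return (1 - w) * (-ppo_clip_term(*ppo_args)) + w * dpo_loss(*dpo_args)
```

With zero margins the DPO term reduces to `-log(0.5) ≈ 0.693`, and the gate `w` smoothly shifts weight toward the preference loss as the reward difference grows past `tau`.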