CRISP: Compressing Redundancy in Chain-of-Thought via Intrinsic Saliency Pruning

arXiv cs.CL · April 21, 2026


Key Points

  • The paper introduces CRISP, a framework that compresses long chain-of-thought (CoT) reasoning by leveraging the model’s intrinsic saliency rather than external compressors.
  • It finds that the model’s reasoning termination token serves as an “information anchor”: its attention pattern separates essential reasoning steps from redundant ones.
  • CRISP uses these internal attention signals to guide fine-grained (“atomic”) compression operations, aiming to keep logical coherence while increasing information density.
  • Experiments across multiple backbone models and mathematical datasets show CRISP reduces token counts by 50–60% without sacrificing accuracy, addressing computational overhead and latency in long-context reasoning.
  • The authors open-source their implementation to support further research on efficient reasoning methods.
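To make the anchor-attention idea concrete, here is a minimal sketch of saliency-guided pruning: score each chain-of-thought token by how much attention the termination (anchor) token pays to it, then keep only the most-attended tokens in their original order. The function name, inputs, and the token-level granularity are illustrative assumptions; CRISP’s actual policy applies finer-grained “atomic” compression operations guided by the model’s internal attention.

```python
import numpy as np

def prune_cot_by_anchor_attention(tokens, anchor_attention, keep_ratio=0.5):
    """Keep the CoT tokens most attended to by the anchor token.

    tokens           -- list of CoT tokens (hypothetical tokenization)
    anchor_attention -- 1-D array: the anchor token's attention weight
                        over each preceding CoT token
    keep_ratio       -- fraction of tokens to retain
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    # Indices of the most salient (most-attended) tokens.
    top = np.argsort(anchor_attention)[-n_keep:]
    # Restore original order so the pruned chain stays coherent.
    return [tokens[i] for i in np.sort(top)]

# Toy usage: the anchor attends mostly to two of four tokens.
tokens = ["step1", "filler", "step2", "filler2"]
attn = np.array([0.4, 0.05, 0.5, 0.05])
print(prune_cot_by_anchor_attention(tokens, attn, keep_ratio=0.5))
# → ['step1', 'step2']
```

In a real setting the attention vector would come from the model’s own attention maps at the termination token; this sketch only illustrates the ranking-and-filtering step.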

Abstract

Long Chain-of-Thought (CoT) reasoning is pivotal to the success of recent reasoning models but suffers from high computational overhead and latency. While prior works attempt to compress CoT via external compressors, they often fail to align with the model's internal reasoning dynamics, resulting in the loss of critical logical steps. This paper presents **C**ompressing **R**edundancy in Chain-of-Thought via **I**ntrinsic **S**aliency **P**runing (**CRISP**), a framework that compresses CoT by exploiting the model's intrinsic saliency. Our analysis reveals a distinct phenomenon: the reasoning termination token `[object Object]` acts as an information anchor, whose attention pattern effectively demarcates essential reasoning from redundancy. Based on this finding, we design a policy that uses these intrinsic attention signals to guide atomic compression operations. In contrast to coarse-grained pruning strategies, CRISP strategically distills the reasoning chain to maximize information density while preserving logical coherence. Empirical results across various backbone models and mathematical datasets demonstrate that CRISP achieves a 50–60% reduction in token count without compromising accuracy, effectively mitigating the efficiency bottleneck of long-context reasoning. We open-source our implementation to facilitate further research in efficient reasoning.