When Is Thinking Enough? Early Exit via Sufficiency Assessment for Efficient Reasoning

arXiv cs.CL / 4/9/2026


Key Points

  • The paper addresses inefficient “overthinking” in large reasoning models by enabling early termination of chain-of-thought once the model determines it has enough evidence to answer correctly.
  • It proposes Dynamic Thought Sufficiency in Reasoning (DTSR), a two-stage framework that monitors reflection signals and then performs a thought sufficiency check to choose an early-exit point.
  • Experiments on Qwen3 models show that DTSR cuts reasoning length by 28.9%–34.9% while incurring only minimal performance loss, improving computational efficiency.
  • The authors also analyze issues like overconfidence in large reasoning models and how self-evaluation paradigms can affect the reliability of early-exit decisions.

Abstract

Large reasoning models (LRMs) have achieved remarkable performance in complex reasoning tasks, driven by their powerful inference-time scaling capability. However, LRMs often suffer from overthinking, which results in substantial computational redundancy and significantly reduces efficiency. Early-exit methods aim to mitigate this issue by terminating reasoning once sufficient evidence has been generated, yet existing approaches mostly rely on handcrafted or empirical indicators that are unreliable and impractical. In this work, we introduce Dynamic Thought Sufficiency in Reasoning (DTSR), a novel framework for efficient reasoning that enables the model to dynamically assess the sufficiency of its chain-of-thought (CoT) and determine the optimal point for early exit. Inspired by human metacognition, DTSR operates in two stages: (1) Reflection Signal Monitoring, which identifies reflection signals as potential cues for early exit, and (2) Thought Sufficiency Check, which evaluates whether the current CoT is sufficient to derive the final answer. Experimental results on the Qwen3 models show that DTSR reduces reasoning length by 28.9%-34.9% with minimal performance loss, effectively mitigating overthinking. We further discuss overconfidence in LRMs and self-evaluation paradigms, providing valuable insights for early-exit reasoning.
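The two-stage procedure described in the abstract can be sketched as a simple control loop: accumulate chain-of-thought segments, and whenever a reflection signal appears (stage 1), run a sufficiency check on the CoT so far (stage 2) to decide whether to exit early. The cue list, segment granularity, and toy sufficiency check below are illustrative assumptions, not the paper's actual implementation, which would query the model itself.

```python
# Hedged sketch of a DTSR-style two-stage early-exit loop.
# All names and the cue list are hypothetical; the paper's actual
# sufficiency check would query the reasoning model, not a keyword rule.

REFLECTION_CUES = ("wait", "alternatively", "let me double-check")

def is_reflection_signal(segment: str) -> bool:
    """Stage 1 (Reflection Signal Monitoring): flag segments that
    open with a reflection cue, a candidate early-exit point."""
    return segment.lower().lstrip().startswith(REFLECTION_CUES)

def reason_with_early_exit(segments, sufficiency_check):
    """Accumulate CoT segments; at each reflection signal, run
    Stage 2 (Thought Sufficiency Check) on the CoT so far and
    terminate reasoning if it already supports a final answer."""
    cot = []
    for seg in segments:
        if is_reflection_signal(seg) and sufficiency_check(cot):
            break  # early exit: remaining reflection is redundant
        cot.append(seg)
    return cot

# Toy sufficiency check: sufficient once a candidate answer appeared.
def toy_check(cot):
    return any("answer is" in s for s in cot)

trace = [
    "Compute 12 * 7 = 84.",
    "So the answer is 84.",
    "Wait, let me re-verify the multiplication.",  # reflection signal
    "12 * 7 = 84, confirmed.",
]
kept = reason_with_early_exit(trace, toy_check)
print(len(kept))  # -> 2: exits before the redundant re-verification
```

The sketch exits at the first reflection signal that follows a sufficient CoT, dropping the redundant verification segments, which is the mechanism behind the reported 28.9%–34.9% reduction in reasoning length.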