AI Navigate

Efficient Reasoning with Balanced Thinking

arXiv cs.AI / 3/16/2026


Key Points

  • The paper identifies overthinking and underthinking as bottlenecks for large reasoning models, limiting efficiency and accuracy in resource-constrained settings.
  • It proposes ReBalance, a training-free framework that uses confidence dynamics to detect overthinking and underthinking, and constructs a steering vector from reasoning-mode prototypes to guide the model's reasoning trajectory.
  • A dynamic control function modulates the steering vector in real time to prune redundant steps during overthinking and encourage exploration during underthinking, improving robustness.
  • Extensive experiments show ReBalance works across four models from 0.5B to 32B parameters and nine benchmarks spanning math reasoning, general QA, and coding, reducing output redundancy while improving accuracy.
  • The method is plug-and-play for deployment, and code is available at the linked GitHub repository.
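The confidence-based detection in the key points above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the threshold values (`var_thresh`, `over_thresh`) and the exact statistics are hypothetical placeholders standing in for the paper's confidence-dynamics criteria (high confidence variance for overthinking, consistent overconfidence for underthinking).

```python
import statistics

def classify_reasoning(confidences, var_thresh=0.02, over_thresh=0.9):
    """Classify a reasoning trace from its per-step confidence scores.

    Thresholds here are illustrative, not values from the paper.
    """
    mean_c = statistics.fmean(confidences)
    var_c = statistics.pvariance(confidences)
    if var_c > var_thresh:
        return "overthinking"    # oscillating confidence: high variance
    if mean_c > over_thresh:
        return "underthinking"   # uniformly high confidence: overconfidence
    return "balanced"

# Synthetic example traces:
print(classify_reasoning([0.95, 0.4, 0.9, 0.3, 0.85]))  # -> "overthinking"
print(classify_reasoning([0.96, 0.97, 0.95, 0.98]))     # -> "underthinking"
```

In a real deployment the confidence signal would come from the model's token-level probabilities at each reasoning step; here it is passed in as a plain list for clarity.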

Abstract

Large Reasoning Models (LRMs) have shown remarkable reasoning capabilities, yet they often suffer from overthinking, expending redundant computational steps on simple problems, or underthinking, failing to explore sufficient reasoning paths despite inherent capabilities. These issues lead to inefficiencies and potential inaccuracies, limiting practical deployment in resource-constrained settings. Existing methods to mitigate overthinking, such as suppressing reflective keywords or adjusting reasoning length, may inadvertently induce underthinking, compromising accuracy. Therefore, we propose ReBalance, a training-free framework that achieves efficient reasoning with balanced thinking. ReBalance leverages confidence as a continuous indicator of reasoning dynamics, identifying overthinking through high confidence variance and underthinking via consistent overconfidence. By aggregating hidden states from a small-scale dataset into reasoning mode prototypes, we compute a steering vector to guide LRMs' reasoning trajectories. A dynamic control function modulates this vector's strength and direction based on real-time confidence, pruning redundancy during overthinking and promoting exploration during underthinking. Extensive experiments conducted on four models ranging from 0.5B to 32B, and across nine benchmarks in math reasoning, general question answering, and coding tasks demonstrate that ReBalance effectively reduces output redundancy while improving accuracy, offering a general, training-free, and plug-and-play strategy for efficient and robust LRM deployment. Code is available at https://github.com/yu-lin-li/ReBalance.
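The steering mechanism described in the abstract, a vector computed from reasoning-mode prototypes and modulated by a real-time control function, can be sketched roughly as below. Everything here is an assumption for illustration: the prototype construction (a simple mean over hidden states), the difference-of-means steering direction, and the linear control function with knobs `pivot` and `alpha` are stand-ins, not the paper's actual formulas.

```python
def steering_vector(over_states, under_states):
    """Illustrative steering direction: difference of mean hidden-state
    prototypes for the overthinking and underthinking modes.

    The paper aggregates hidden states from a small-scale dataset into
    reasoning-mode prototypes; the mean used here is an assumption.
    """
    def mean_vec(states):
        n = len(states)
        return [sum(col) / n for col in zip(*states)]

    proto_over = mean_vec(over_states)
    proto_under = mean_vec(under_states)
    return [a - b for a, b in zip(proto_over, proto_under)]

def apply_steering(hidden, steer, confidence, pivot=0.7, alpha=2.0):
    """Hypothetical dynamic control function: confidence above `pivot`
    pushes the hidden state away from the overthinking prototype
    (pruning redundancy), confidence below `pivot` pushes toward it
    (promoting exploration). `pivot` and `alpha` are illustrative.
    """
    scale = alpha * (confidence - pivot)
    return [h - scale * s for h, s in zip(hidden, steer)]

# Toy 2-d example with synthetic prototype states:
steer = steering_vector([[1.0, 0.0], [1.0, 2.0]], [[0.0, 0.0], [0.0, 0.0]])
print(steer)                                  # -> [1.0, 1.0]
print(apply_steering([0.0, 0.0], steer, 1.0)) # high confidence: steered away
```

The key design point carried over from the abstract is that one vector serves both failure modes: its sign and magnitude are set at each step from the live confidence signal, so no retraining is needed.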