DiffAdapt: Difficulty-Adaptive Reasoning for Token-Efficient LLM Inference

arXiv cs.CL / 4/29/2026

📰 News · Models & Research

Key Points

  • The paper analyzes reasoning traces in large language models and finds a consistent U-shaped pattern in token-probability entropy across easy, medium, and hard problems.
  • It observes an “overthinking” tendency on easy instances, supported by a reported 22–25% entropy reduction when moving from easy to medium difficulty.
  • It introduces DiffAdapt, a lightweight framework that picks an Easy/Normal/Hard inference strategy per question using estimated difficulty and reasoning-trace entropy.
  • DiffAdapt uses fixed prompt/temperature/max-token settings per strategy and does not fine-tune the base LLM; instead, it trains a small probe to classify from the model’s final hidden state.
  • Evaluations across five models and eight benchmarks show comparable or improved accuracy with token reductions of up to 22.4%, indicating more compute-efficient reasoning.
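The entropy signal behind the points above can be made concrete. The sketch below is a minimal illustration, not the paper's implementation: it computes per-token Shannon entropy from logits and maps the trace's mean entropy to a strategy label. The thresholds (`low`, `high`) and the threshold rule itself are assumptions for illustration; DiffAdapt instead trains a small probe on the model's final hidden state.

```python
import numpy as np

def token_entropy(logits: np.ndarray) -> np.ndarray:
    """Per-token Shannon entropy (nats) from a (tokens, vocab) logit array."""
    z = logits - logits.max(axis=-1, keepdims=True)  # stabilize softmax
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Hypothetical thresholds -- the paper uses a learned probe, not a rule.
def select_strategy(mean_entropy: float, low: float = 0.5, high: float = 1.5) -> str:
    if mean_entropy < low:
        return "Easy"    # confident trace: use a terser prompt / tighter token budget
    if mean_entropy > high:
        return "Hard"    # uncertain trace: allow a longer reasoning budget
    return "Normal"

# Toy trace: 3 tokens over a 4-word vocabulary.
logits = np.array([
    [5.0, 0.0, 0.0, 0.0],   # peaked distribution: low entropy
    [1.0, 1.0, 1.0, 1.0],   # uniform distribution: entropy = ln(4)
    [3.0, 2.0, 0.0, 0.0],   # intermediate case
])
entropies = token_entropy(logits)
strategy = select_strategy(float(entropies.mean()))
```

Each strategy would then index into a fixed inference configuration (prompt, temperature, max tokens), matching the paper's description of per-strategy settings; the exact values used by DiffAdapt are not given here.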

Abstract

Recent reasoning Large Language Models (LLMs) demonstrate remarkable problem-solving abilities but often generate long thinking traces whose utility is unclear. Our work aims to improve their efficiency, enabling them to reach high performance without overthinking. First, we analyze the entropy of token probabilities in reasoning traces. Across three models, we observe a consistent U-shaped entropy pattern: high entropy on easy problems despite high accuracy, low entropy on problems of medium difficulty, and high entropy on hard problems reflecting uncertainty. Specifically, we observe a 22–25% entropy reduction from easy to medium difficulty regions, suggesting an overthinking phenomenon on easy instances. Building on these insights, we introduce **DiffAdapt**, a lightweight framework that selects an Easy/Normal/Hard inference strategy per question based on its difficulty and reasoning-trace entropy. Each inference strategy consists of a fixed prompt, temperature, and maximum token length. In contrast to existing efficiency optimization methods, our approach does not fine-tune the base LLM but instead trains a small probe that classifies the LLM's final hidden state, allowing inexpensive adaptation. We comprehensively evaluate our method on five models and eight benchmarks. Our method achieves comparable or improved accuracy while reducing token usage by up to 22.4%, establishing a practical path toward compute-efficient reasoning.