Knowing When to Quit: A Principled Framework for Dynamic Abstention in LLM Reasoning

arXiv stat.ML · April 21, 2026


Key Points

  • The paper addresses compute waste in LLM chain-of-thought reasoning by improving “abstention,” i.e., choosing to withhold outputs that are likely to be incorrect.
  • It focuses on dynamic mid-generation abstention, where the model can terminate an unpromising reasoning trace at each token position rather than only before or after generation.
  • The authors provide a formal, principled analysis by modeling abstention as an explicit action in a regularized reinforcement learning (RL) framework.
  • The work shows that abstaining when the estimated value function drops below a reward threshold provably outperforms common baseline strategies under general conditions.
  • Experiments on mathematical reasoning and toxicity-avoidance tasks support the theory, showing higher selective accuracy via an efficient approximation of the value function.
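The core decision rule in the points above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the per-step value estimates and the abstention reward are hypothetical inputs, and the function simply stops the trace the first time the estimated value of continuing falls below that reward.

```python
def generate_with_abstention(step_values, abstain_reward):
    """Walk a reasoning trace step by step; abstain as soon as the
    estimated value of continuing drops below the abstention reward.

    step_values: hypothetical value estimates V_t at each token position.
    abstain_reward: the reward the model receives for abstaining.
    Returns (steps emitted, outcome, stopping position).
    """
    trace = []
    for t, v in enumerate(step_values):
        if v < abstain_reward:
            # Continuing is worth less than abstaining: terminate early.
            return trace, "abstained", t
        trace.append(t)
    return trace, "completed", len(step_values)


# A trace whose estimated value decays; with abstain_reward = 0.3 the
# rule stops at step 3, before the low-value tail is generated.
values = [0.9, 0.7, 0.5, 0.25, 0.1]
print(generate_with_abstention(values, abstain_reward=0.3))  # → ([0, 1, 2], 'abstained', 3)
```

Note the contrast with static abstention: the same threshold applied only before or after generation would either refuse the whole problem or pay for the full trace before discarding it.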

Abstract

Large language models (LLMs) using chain-of-thought reasoning often waste substantial compute by producing long, incorrect responses. Abstention can mitigate this by withholding outputs unlikely to be correct. While most abstention methods decide to withhold outputs before or after generation, dynamic mid-generation abstention considers early termination of unpromising reasoning traces at each token position. Prior work has explored empirical variants of this idea, but principled guidance for the abstention rule remains lacking. We present a formal analysis of dynamic abstention for LLMs, modeling abstention as an explicit action within a regularized reinforcement learning framework. An abstention reward parameter controls the trade-off between compute and information. We show that abstaining when the value function falls below this reward strictly outperforms natural baselines under general conditions. We further derive a principled and efficient method to approximate the value function. Empirical results on mathematical reasoning and toxicity-avoidance tasks support our theory and demonstrate improved selective accuracy over existing methods.
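The abstract mentions an efficient approximation of the value function but the summary does not describe it. As a generic stand-in only (not the paper's derivation), a common cheap proxy for the value of a partial trace is the geometric-mean token probability, computed from the log-probabilities the model already emits during decoding:

```python
import math

def value_proxy(token_logprobs):
    """Hypothetical value estimate for a partial reasoning trace:
    the geometric-mean per-token probability. This is a generic
    confidence proxy, NOT the approximation derived in the paper."""
    if not token_logprobs:
        return 1.0  # empty trace: no evidence against continuing yet
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def should_abstain(token_logprobs, abstain_reward):
    """Apply the threshold rule: abstain when the estimated value
    of the trace so far falls below the abstention reward."""
    return value_proxy(token_logprobs) < abstain_reward


# Confident trace (high token probabilities) continues; an uncertain
# trace crosses the threshold and is terminated.
print(should_abstain([-0.1, -0.1, -0.2], abstain_reward=0.5))  # → False
print(should_abstain([-2.0, -3.0, -2.5], abstain_reward=0.5))  # → True
```

Because the log-probabilities are a free byproduct of generation, a proxy of this shape adds essentially no compute per token, which is the property any practical mid-generation abstention rule needs.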