Reason Only When Needed: Efficient Generative Reward Modeling via Model-Internal Uncertainty

arXiv cs.CL / 4/14/2026


Key Points

  • The paper proposes E-GRM, a generative reward modeling framework that improves LLM reasoning by applying Chain-of-Thought only when it is actually needed, rather than for every input.
  • E-GRM estimates uncertainty using the convergence behavior of parallel generations from the model, enabling selective reasoning without handcrafted or task-specific triggers.
  • To address limitations of coarse voting-based evaluation, the approach adds a lightweight discriminative scorer trained with a hybrid regression–ranking objective for more fine-grained reward assessment.
  • Experiments on multiple reasoning benchmarks report substantially lower inference cost alongside consistent accuracy gains, suggesting model-internal uncertainty is a general signal for efficient reasoning-aware reward modeling.
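The selective-reasoning idea in the points above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes uncertainty is measured as disagreement among the final answers of parallel samples, and that a single threshold (here the hypothetical `threshold=0.3`) decides when Chain-of-Thought is triggered.

```python
from collections import Counter

def convergence_uncertainty(answers):
    """Uncertainty = 1 - fraction of parallel samples agreeing on the modal answer.

    0.0 means all parallel generations converged; values near 1.0 mean
    the generations diverged and the input is likely hard.
    """
    counts = Counter(answers)
    top_count = counts.most_common(1)[0][1]
    return 1.0 - top_count / len(answers)

def needs_cot(answers, threshold=0.3):
    """Trigger Chain-of-Thought only when parallel generations diverge.

    `threshold` is an illustrative hyperparameter, not a value from the paper.
    """
    return convergence_uncertainty(answers) > threshold
```

With this sketch, `needs_cot(["42", "42", "42", "42"])` is `False` (the samples converge, so the fast direct answer is kept), while `needs_cot(["42", "41", "43", "42"])` is `True` (uncertainty 0.5 exceeds the threshold, so CoT would be invoked).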

Abstract

Recent advances in Generative Reward Models (GRMs) have demonstrated their potential to enhance the reasoning abilities of LLMs through Chain-of-Thought (CoT) prompting. Despite these gains, existing implementations of GRM suffer from two critical limitations. First, CoT prompting is applied indiscriminately to all inputs regardless of their inherent complexity. This introduces unnecessary computational costs for tasks amenable to fast, direct inference. Second, existing approaches primarily rely on voting-based mechanisms to evaluate CoT outputs, which often lack granularity and precision in assessing reasoning quality. In this paper, we propose E-GRM, an efficient generative reward modeling framework grounded in model-internal uncertainty. E-GRM leverages the convergence behavior of parallel model generations to estimate uncertainty and selectively trigger CoT reasoning only when needed, without relying on handcrafted features or task-dependent signals. To improve reward fidelity, we introduce a lightweight discriminative scorer trained with a hybrid regression–ranking objective to provide fine-grained evaluation of reasoning paths. Experiments on multiple reasoning benchmarks show that E-GRM substantially reduces inference cost while consistently improving answer accuracy, demonstrating that model-internal uncertainty is an effective and general signal for efficient reasoning-aware reward modeling.
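A hybrid regression–ranking objective of the kind the abstract describes typically combines a pointwise fit to gold reward values with a pairwise ordering penalty. The sketch below is an assumption about one common form of such a loss (MSE plus pairwise hinge), not the paper's exact objective; `margin` and the mixing weight `alpha` are hypothetical hyperparameters.

```python
def hybrid_loss(scores, targets, margin=0.1, alpha=0.5):
    """Illustrative hybrid regression-ranking loss for a discriminative scorer.

    scores:  predicted rewards for a batch of reasoning paths
    targets: gold reward values for the same paths
    """
    n = len(scores)
    # Regression term: mean squared error against gold rewards (pointwise fit).
    mse = sum((s - t) ** 2 for s, t in zip(scores, targets)) / n
    # Ranking term: pairwise hinge penalizing pairs where a path with a
    # higher gold reward is not scored at least `margin` above a lower one.
    pairs = [(i, j) for i in range(n) for j in range(n) if targets[i] > targets[j]]
    rank = sum(max(0.0, margin - (scores[i] - scores[j])) for i, j in pairs)
    rank /= max(1, len(pairs))
    return alpha * mse + (1 - alpha) * rank
```

For example, `hybrid_loss([1.0, 0.0], [1.0, 0.0])` is `0.0`: the predictions match the targets exactly and the better path already outscores the worse one by more than the margin, so both terms vanish.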