Bringing Value Models Back: Generative Critics for Value Modeling in LLM Reinforcement Learning

arXiv cs.LG / 4/14/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper revisits credit assignment in LLM reinforcement learning and argues that conventional discriminative critics, which predict a scalar value in a single shot, are difficult to train reliably because that one-shot prediction paradigm limits their expressiveness.
  • It cites representation complexity theory and scaling experiments showing that these critics do not improve reliably with increased scale.
  • To address this, the authors propose Generative Actor-Critic (GenAC), replacing one-shot value prediction with a generative critic that performs chain-of-thought reasoning before outputting a value estimate.
  • They add In-Context Conditioning to keep the critic calibrated to the current actor during training, improving both value approximation quality and robustness.
  • Experiments indicate that GenAC improves ranking reliability and out-of-distribution generalization, and yields stronger downstream RL performance than both value-based and value-free baselines.
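To make the "generative critic" idea concrete, here is a minimal, hypothetical sketch of the interface such a critic might expose: instead of a scalar regression head, the critic model generates chain-of-thought text and ends with an explicit value line that the trainer parses. The prompt wording, the `VALUE:` format, and the parsing fallback are illustrative assumptions, not details from the paper.

```python
import re

# Hypothetical prompt for a generative value critic: the model is asked to
# reason in text first, then commit to a scalar value on a final line.
CRITIC_PROMPT = (
    "You are a value critic. Given the problem and the partial solution, "
    "reason step by step about how likely the current policy is to succeed, "
    "then end with a line 'VALUE: <number in [0, 1]>'.\n\n"
    "Problem: {problem}\nPartial solution: {prefix}\n"
)

def parse_value(critic_output: str, default: float = 0.5) -> float:
    """Extract the scalar value estimate from the critic's generated text."""
    match = re.search(r"VALUE:\s*([01](?:\.\d+)?)", critic_output)
    if match is None:
        return default  # fallback if the critic ignored the output format
    return min(1.0, max(0.0, float(match.group(1))))  # clamp to [0, 1]

# What a generative critic's output might look like:
sample = (
    "The partial solution sets up the equation correctly but has not yet "
    "solved for x; similar prefixes usually succeed.\nVALUE: 0.8"
)
print(parse_value(sample))  # -> 0.8
```

The point of the sketch is the contrast with a discriminative critic: the value estimate here is the terminal token of a generated reasoning trace, so test-time computation (the chain of thought) is spent before committing to a number.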

Abstract

Credit assignment is a central challenge in reinforcement learning (RL). Classical actor-critic methods address this challenge through fine-grained advantage estimation based on a learned value function. However, learned value models are often avoided in modern large language model (LLM) RL because conventional discriminative critics are difficult to train reliably. We revisit value modeling and argue that this difficulty is partly due to limited expressiveness. In particular, representation complexity theory suggests that value functions can be hard to approximate under the one-shot prediction paradigm used by existing value models, and our scaling experiments show that such critics do not improve reliably with scale. Motivated by this observation, we propose Generative Actor-Critic (GenAC), which replaces one-shot scalar value prediction with a generative critic that performs chain-of-thought reasoning before producing a value estimate. We further introduce In-Context Conditioning, which helps the critic remain calibrated to the current actor throughout training. GenAC improves value approximation, ranking reliability, and out-of-distribution generalization, and these gains translate into stronger downstream RL performance than both value-based and value-free baselines. Overall, our results suggest that stronger value modeling is a promising direction for improving credit assignment in LLM reinforcement learning.
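The abstract's phrase "fine-grained advantage estimation based on a learned value function" refers to the classical actor-critic recipe. As background, here is a minimal sketch of Generalized Advantage Estimation (GAE), a standard way to turn per-step value estimates into advantages; this is textbook GAE, not necessarily the exact estimator used in the paper.

```python
def gae_advantages(rewards, values, gamma=1.0, lam=0.95):
    """Generalized Advantage Estimation over one trajectory.

    rewards: per-step rewards, length T.
    values:  critic value estimates, length T + 1 (last entry bootstraps
             the value of the final state; 0.0 for terminal states).
    """
    advantages = [0.0] * len(rewards)
    gae = 0.0
    # Accumulate discounted TD errors backwards through the trajectory.
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

# Toy example: sparse terminal reward, illustrative value estimates.
rewards = [0.0, 0.0, 1.0]
values = [0.5, 0.6, 0.8, 0.0]
adv = gae_advantages(rewards, values, gamma=1.0, lam=1.0)  # ≈ [0.5, 0.4, 0.2]
```

With `gamma = lam = 1.0` this reduces to return-minus-value, which makes clear why the critic's quality matters: every advantage is computed relative to the learned values, so a miscalibrated critic corrupts credit assignment at every step.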