Reward Modeling from Natural Language Human Feedback

arXiv cs.CL / 5/4/2026


Key Points

  • The paper argues that rewarding generative reward models (GRMs) solely on binary preference labels lets them “game” the objective, guessing the correct label with superficial or unjustified critiques, which injects substantial noise into the reinforcement-learning reward signal.
  • It proposes RM-NLHF (Reward Modeling from Natural Language Human Feedback), which scores the similarity between model-generated and human natural-language critiques to produce richer, process-based reward signals (a rough sketch follows this list).
  • To reduce reliance on large-scale human critique data, the authors introduce Meta Reward Model (MetaRM), which learns to predict process rewards from data that includes human critiques and then generalizes to data without them (see the second sketch, after the abstract).
  • Experiments across multiple benchmarks show that RM-NLHF (and the MetaRM approach) consistently outperforms state-of-the-art GRMs trained using outcome-only reward supervision.
  • Overall, the work supports the idea that integrating natural-language feedback improves reward modeling quality compared with supervision limited to binary outcomes.
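
The abstract does not pin down how the critique similarity is computed or how it interacts with the outcome label, so the sketch below is only an illustration of the idea: score a GRM critique against the human reference critique and use that score (here blended with the binary outcome) as the training reward. The unigram-F1 similarity, the weight `alpha`, and all function names are assumptions made for the example, not the paper's method.

```python
import re
from collections import Counter

def token_f1(candidate: str, reference: str) -> float:
    """Unigram F1 overlap between two critiques. The abstract does not
    say which similarity measure RM-NLHF uses; this is a stand-in."""
    cand = Counter(re.findall(r"[a-z0-9]+", candidate.lower()))
    ref = Counter(re.findall(r"[a-z0-9]+", reference.lower()))
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def training_reward(grm_critique: str, human_critique: str,
                    grm_label: str, human_label: str,
                    alpha: float = 0.5) -> float:
    """Hypothetical RM-NLHF-style reward: blend the critique-similarity
    (process) signal with the binary outcome signal. The weight `alpha`
    and the blending itself are illustrative assumptions."""
    outcome = 1.0 if grm_label == human_label else 0.0
    return alpha * token_f1(grm_critique, human_critique) + (1.0 - alpha) * outcome

# Toy example: a lazy critique that merely guesses the right label earns
# less reward than one that matches the human rationale.
human = "Response A cites the source correctly and checks the arithmetic."
sound = "Response A verifies the arithmetic and cites the source, so it is preferred."
lazy = "Response A is better."
print(training_reward(sound, human, "A", "A"))  # higher: right label, matching rationale
print(training_reward(lazy, human, "A", "A"))   # lower: right label, superficial critique
```

Even in this toy form, the point of the process signal is visible: a critique that merely lands on the right label scores lower than one that also matches the human rationale, which is exactly the guessing behavior the paper wants to penalize.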

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) on preference data has become the mainstream approach for training Generative Reward Models (GRMs). Typically in pairwise rewarding tasks, GRMs generate reasoning chains ending with critiques and preference labels, and RLVR then relies on the correctness of the preference labels as the training reward. However, in this paper, we demonstrate that such binary classification tasks make GRMs susceptible to guessing correct outcomes without sound critiques. Consequently, these spurious successes introduce substantial noise into the reward signal, thereby impairing the effectiveness of reinforcement learning. To address this issue, we propose Reward Modeling from Natural Language Human Feedback (RM-NLHF), which leverages natural language feedback to obtain process reward signals, thereby mitigating the problem of limited solution space inherent in binary tasks. Specifically, we compute the similarity between GRM-generated and human critiques as the training reward, which provides more accurate reward signals than outcome-only supervision. Additionally, considering that human critiques are difficult to scale up, we introduce Meta Reward Model (MetaRM), which learns to predict process rewards from datasets with human critiques and then generalizes to data without human critiques. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art GRMs trained with outcome-only reward, confirming the superiority of integrating natural language over binary human feedback as supervision.
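
The abstract likewise leaves MetaRM's architecture and training procedure unspecified. Below is a minimal sketch of the general idea, under the assumption that MetaRM can be framed as a regressor from a GRM critique to the process reward a human critique would have yielded: fit it on the annotated subset (with targets such as the similarity scores from the previous sketch), then use it to supply process rewards where no human critique exists. The hashed bag-of-words features, ridge regression, and toy numbers are stand-ins, not the paper's design.

```python
# Hypothetical MetaRM sketch: predict the process reward directly from a
# GRM critique, so RLVR can scale beyond the human-annotated subset.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import Ridge

vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)

# Training split: examples that DO have human critiques. The targets are
# the similarity-based process rewards computed against those critiques
# (toy values here, standing in for real annotations).
train_critiques = [
    "Response A verifies the arithmetic and cites the source, so it is preferred.",
    "Response A is better.",
]
train_process_rewards = [0.7, 0.3]

meta_rm = Ridge(alpha=1.0)
meta_rm.fit(vectorizer.transform(train_critiques), train_process_rewards)

# Inference split: examples WITHOUT human critiques. MetaRM now supplies
# the process reward in place of the missing human reference.
new_critiques = ["Response B skips the unit check and ignores the constraint."]
print(meta_rm.predict(vectorizer.transform(new_critiques))[0])
```

In practice one would expect MetaRM to condition on more than the critique text alone (at least the prompt and the compared responses), but the division of labor is the same: human critiques define the process reward on a small set, and MetaRM extrapolates it to the rest of the training data.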