RMGAP: Benchmarking the Generalization of Reward Models across Diverse Preferences

arXiv cs.CL / 5/5/2026


Key Points

  • The paper introduces RMGAP, a new benchmark designed to test whether reward models (RMs) generalize across diverse user preferences rather than only universal ones.
  • RMGAP covers 1,097 instances across Chat, Writing, Reasoning, and Safety domains; for each collected prompt, the authors generate four candidate responses with distinct linguistic profiles and then construct preference-specific prompts under which exactly one response is the appropriate choice (see the sketch after this list).
  • To capture real-world variability in how preferences are expressed, the benchmark extends each prompt with two paraphrased variants, covering different phrasings of the same underlying preference.
  • An evaluation of 24 state-of-the-art reward models finds significant shortcomings: the best model reaches only 49.27% Best-of-N accuracy, indicating limited generalization.
  • The authors publicly release the data and code at https://github.com/nanzhi84/RMGAP to support further research on reward model generalization.
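
The paper's exact data schema is not reproduced here; the following is a minimal, hypothetical sketch of what one RMGAP-style instance might look like, with all field names and example strings invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class RMGapInstance:
    """Illustrative RMGAP-style benchmark instance (field names are assumptions)."""
    domain: str             # one of: Chat, Writing, Reasoning, Safety
    base_prompt: str        # the originally collected task prompt
    candidates: list[str]   # four responses with distinct linguistic profiles
    preference_prompt: str  # tailored prompt making exactly one candidate appropriate
    paraphrases: list[str]  # two rephrasings of the preference prompt
    target_index: int       # index of the uniquely appropriate candidate

example = RMGapInstance(
    domain="Writing",
    base_prompt="Write a short note announcing a schedule change.",
    candidates=[
        "Formal memo-style announcement ...",
        "Casual, emoji-heavy message ...",
        "Terse bullet-point summary ...",
        "Elaborate, narrative-style note ...",
    ],
    preference_prompt=(
        "Write a short note announcing a schedule change. "
        "Keep it casual and friendly for a team group chat."
    ),
    paraphrases=[
        "Announce the schedule change in a relaxed, chatty tone suited to a team chat.",
        "Draft an informal, upbeat note about the schedule change for the group chat.",
    ],
    target_index=1,
)
```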

Abstract

Reinforcement Learning from Human Feedback (RLHF) has become the standard paradigm for language model alignment, where reward models (RMs) directly determine alignment effectiveness. In this work, we focus on how to evaluate the generalizability of reward models. By "generalizability", we mean the ability of RMs to correctly rank responses in line with diverse user preferences. However, existing reward model benchmarks are typically designed around a universal preference, failing to assess this generalization. To address this critical gap, we introduce RMGAP, a benchmark comprising 1,097 instances across Chat, Writing, Reasoning, and Safety domains. Since different users exhibit diverse preferences for the same task, we first generate four distinct responses with different linguistic profiles for each collected prompt. However, the original prompt set lacks the specificity to convey different preferences. We therefore construct tailored prompts by contrasting these candidates and designing scenarios in which one response becomes the uniquely appropriate choice. Moreover, we observe that users often express the same preference using different phrasings, and thus extend each prompt with two paraphrased variants. Our evaluation of 24 state-of-the-art RMs reveals their substantial limitations: even the best RM achieves only 49.27% Best-of-N accuracy, highlighting considerable room for improvement in reward model generalization. Related data and code are available at https://github.com/nanzhi84/RMGAP.
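
As a minimal sketch of the evaluation setup, the snippet below shows how Best-of-N accuracy could be computed against a scalar reward model: the model's top-scored candidate counts as a hit only when it is the uniquely appropriate one. The `score_fn` callable and the dictionary fields are placeholders mirroring the illustrative instance above, not the authors' released code.

```python
from typing import Callable, Sequence


def best_of_n_accuracy(
    instances: Sequence[dict],
    score_fn: Callable[[str, str], float],
) -> float:
    """Fraction of instances where the highest-scored candidate is the target.

    `score_fn(prompt, response)` stands in for any scalar reward model;
    instance fields ("preference_prompt", "candidates", "target_index")
    are illustrative names, not the benchmark's actual schema.
    """
    hits = 0
    for inst in instances:
        scores = [
            score_fn(inst["preference_prompt"], response)
            for response in inst["candidates"]
        ]
        best = max(range(len(scores)), key=scores.__getitem__)
        hits += int(best == inst["target_index"])
    return hits / len(instances)
```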