ImplicitRM: Unbiased Reward Modeling from Implicit Preference Data for LLM Alignment

arXiv cs.CL / 3/25/2026


Key Points

  • The paper introduces ImplicitRM, a method for learning reward models for LLM alignment using implicit human feedback such as clicks and copies rather than costly explicit preference labels.
  • It identifies two core problems with implicit preference data: the absence of clear negative samples and systematic user preference bias that changes how easily different responses trigger feedback.
  • ImplicitRM addresses these issues by splitting training data into four latent groups using a stratification model and then optimizing a likelihood-based objective.
  • The authors claim a theoretical guarantee that the resulting learning objective is unbiased, improving the ability to distinguish true negatives from bias-induced signals.
  • Experiments reportedly show that ImplicitRM can learn accurate reward models across multiple implicit preference datasets, and the authors provide code.
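The user-preference-bias problem in the second key point has a classic statistical shape: a missing click can mean either a truly bad response or a good response with low click propensity, so counting clicks directly under-estimates quality. A minimal toy sketch of why a propensity correction removes this bias, using standard inverse-propensity scoring (a stand-in illustration of "unbiased from implicit feedback", not the paper's actual ImplicitRM objective; all names and the 0.8/0.2 propensity values are ours):

```python
import random

random.seed(0)

# Toy setup: each response has a true binary reward and a click propensity
# that depends on surface features, not on quality. Implicit logs record
# only clicks; silent items mix true negatives with low-propensity positives.

def simulate_logs(n):
    logs = []
    for _ in range(n):
        true_reward = random.random() < 0.5
        propensity = 0.8 if random.random() < 0.5 else 0.2  # user-preference bias
        clicked = true_reward and (random.random() < propensity)
        logs.append((true_reward, propensity, clicked))
    return logs

def naive_positive_rate(logs):
    # Treat "clicked" as the positive label: biased low by propensity.
    return sum(c for _, _, c in logs) / len(logs)

def ips_positive_rate(logs):
    # Inverse-propensity correction: each click is reweighted by 1/propensity,
    # giving an unbiased estimate of the true positive rate.
    return sum(c / p for _, p, c in logs) / len(logs)

logs = simulate_logs(200_000)
true_rate = sum(t for t, _, _ in logs) / len(logs)
print(f"true positive rate : {true_rate:.3f}")
print(f"naive estimate     : {naive_positive_rate(logs):.3f}")  # under-counts
print(f"IPS estimate       : {ips_positive_rate(logs):.3f}")    # near the truth
```

The naive estimate lands well below the true rate because high- and low-propensity responses trigger feedback at different rates; the reweighted estimate does not, which is the kind of debiasing guarantee the paper claims for its likelihood-based objective.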

Abstract

Reward modeling represents a long-standing challenge in reinforcement learning from human feedback (RLHF) for aligning language models. Current reward modeling is heavily contingent upon explicit feedback data with high collection costs. In this work, we study *implicit reward modeling* -- learning reward models from implicit human feedback (e.g., clicks and copies) -- as a cost-effective alternative. We identify two fundamental challenges in implicit reward modeling: (1) Implicit preference data lacks definitive negative samples, which makes standard positive-negative classification methods inapplicable; (2) Implicit preference data suffers from user preference bias, where different responses have different propensities to elicit user feedback actions, which exacerbates the difficulty of distinguishing definitive negative samples. To address these challenges, we propose ImplicitRM, which aims to learn unbiased reward models from implicit preference data. ImplicitRM stratifies training samples into four latent groups via a stratification model. Building on this, it derives a learning objective through likelihood maximization, which we prove is theoretically unbiased, effectively resolving both challenges. Experiments demonstrate that ImplicitRM learns accurate reward models across implicit preference datasets. Code is available on our project website.
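The abstract's two moves, stratifying samples into latent groups and then maximizing the observed-data likelihood, can be illustrated with a toy model. We assume (our assumption, not the paper's exact formulation) that the four groups come from crossing the true label {positive, negative} with whether feedback fired {clicked, silent}; with a known click propensity `Q`, silent positives and true negatives are indistinguishable in the logs, yet the likelihood over the positive prevalence `pi` still has an unbiased maximum:

```python
import math
import random

random.seed(1)

# Four latent groups (illustrative assumption):
#   1. positive & clicked -> observed as a click
#   2. positive & silent  -> observed as no click (missed positive)
#   3. negative & clicked -> ruled out in this toy (negatives never clicked)
#   4. negative & silent  -> observed as no click (true negative)
Q = 0.3          # click propensity given a truly preferred response
TRUE_PI = 0.6    # true fraction of preferred responses

def simulate_clicks(n):
    return [int(random.random() < TRUE_PI and random.random() < Q) for _ in range(n)]

def log_likelihood(pi, clicks):
    p_click = pi * Q                       # group 1
    p_silent = pi * (1 - Q) + (1 - pi)     # groups 2 + 4, pooled
    return sum(math.log(p_click) if c else math.log(p_silent) for c in clicks)

clicks = simulate_clicks(100_000)
# Grid-search MLE for pi; the likelihood peaks near the true prevalence
# even though no definitive negative labels were ever observed.
grid = [i / 1000 for i in range(1, 1000)]
pi_hat = max(grid, key=lambda pi: log_likelihood(pi, clicks))
print(f"estimated pi = {pi_hat:.3f} (true {TRUE_PI})")
```

The point of the sketch is structural: once the observation model accounts for which latent groups collapse into the same observable outcome, maximizing the likelihood recovers the true quantity without explicit negatives, which mirrors the unbiasedness claim ImplicitRM makes for its full reward-model objective.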