AI Navigate

CausalRM: Causal-Theoretic Reward Modeling for RLHF from Observational User Feedbacks

arXiv cs.LG / 3/20/2026


Key Points

  • The paper proposes observational reward modeling to learn reward models from user interactions like clicks, copies, and upvotes, as a scalable alternative to traditional expert annotations.
  • It identifies two main challenges: annotation noise that causes observed feedback to deviate from true user preference, and bias from users who only provide feedback on responses they feel strongly about.
  • CausalRM introduces a noise-aware surrogate loss that is provably equivalent to the primal loss in noise-free conditions by explicitly modeling how annotation errors occur, and uses propensity scores to reweight training samples to remove user-preference bias.
  • Experiments across diverse LLM backbones and benchmarks show substantial gains, including 49.2% on WildGuardMix and 32.7% on HarmBench, and code is available on the project website.
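The paper's exact surrogate loss is not given in this summary, so the snippet below is only an illustrative sketch of the general idea it describes: a backward-corrected loss for symmetric label-flip noise (in the style of classic noisy-label corrections), which reduces to the ordinary loss when the noise rate is zero, matching the "provably equivalent under noise-free conditions" property. The function name and the symmetric-noise assumption are ours, not the paper's.

```python
def backward_corrected_loss(loss_observed: float, loss_flipped: float, eps: float) -> float:
    """Backward-corrected surrogate loss under symmetric label-flip noise.

    loss_observed: loss evaluated with the observed (possibly noisy) label
    loss_flipped:  loss evaluated with the opposite label
    eps:           assumed label-flip probability (must be < 0.5)

    In expectation over the noise process, this surrogate equals the clean
    loss; with eps == 0 it reduces exactly to loss_observed.
    """
    if not 0.0 <= eps < 0.5:
        raise ValueError("flip rate eps must lie in [0, 0.5)")
    return ((1.0 - eps) * loss_observed - eps * loss_flipped) / (1.0 - 2.0 * eps)
```

For example, with no noise (`eps=0.0`) the surrogate returns the observed loss unchanged, while a positive `eps` discounts the contribution of labels that may have been flipped.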

Abstract

Despite the success of reinforcement learning from human feedback (RLHF) in aligning language models, current reward modeling heavily relies on experimental feedback data collected from human annotators under controlled and costly conditions. In this work, we introduce observational reward modeling -- learning reward models with observational user feedback (e.g., clicks, copies, and upvotes) -- as a scalable and cost-effective alternative. We identify two fundamental challenges in this setting: (1) observational feedback is noisy due to annotation errors, causing it to deviate from true user preference; (2) observational feedback is biased by user preference, where users preferentially provide feedback on responses they feel strongly about, which creates a distribution shift between training and inference data. To address these challenges, we propose CausalRM, a causal-theoretic reward modeling framework that aims to learn unbiased reward models from observational feedback. To tackle challenge (1), CausalRM introduces a noise-aware surrogate loss term that is provably equivalent to the primal loss under noise-free conditions by explicitly modeling the annotation error generation process. To tackle challenge (2), CausalRM uses propensity scores -- the probability of a user providing feedback for a given response -- to reweight training samples, yielding a loss function that eliminates user preference bias. Extensive experiments across diverse LLM backbones and benchmark datasets validate that CausalRM effectively learns accurate reward signals from noisy and biased observational feedback and delivers substantial performance improvements on downstream RLHF tasks -- including a 49.2% gain on WildGuardMix and a 32.7% improvement on HarmBench. Code is available on our project website.
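The propensity-score reweighting the abstract describes is an instance of inverse propensity weighting (IPW): samples that users were unlikely to give feedback on are up-weighted so the training distribution matches the inference distribution. CausalRM's exact estimator is not reproduced here; the sketch below applies the generic IPW recipe to a standard Bradley-Terry pairwise reward-model loss. All names, and the clipping threshold, are our assumptions for illustration.

```python
import math

def ipw_pairwise_loss(rewards_chosen, rewards_rejected, propensities,
                      clip_min=1e-3):
    """Inverse-propensity-weighted Bradley-Terry pairwise loss (sketch).

    rewards_chosen / rewards_rejected: reward-model scores per pair
    propensities: estimated probability that the user gave feedback
                  on each pair; low-propensity pairs get larger weight
    clip_min: floor on propensities to keep weights numerically stable
    """
    total, n = 0.0, 0
    for rc, rr, p in zip(rewards_chosen, rewards_rejected, propensities):
        margin = rc - rr
        nll = math.log1p(math.exp(-margin))      # -log sigmoid(margin)
        weight = 1.0 / max(p, clip_min)          # inverse propensity weight
        total += weight * nll
        n += 1
    return total / n
```

When every propensity is 1 this reduces to the ordinary unweighted pairwise loss; halving a pair's propensity doubles its contribution, which is what removes the feedback-selection bias in expectation (assuming the propensity estimates are accurate).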