CausalRM: Causal-Theoretic Reward Modeling for RLHF from Observational User Feedbacks
arXiv cs.LG / 3/20/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper proposes observational reward modeling, which learns reward models from user interactions such as clicks, copies, and upvotes, as a scalable alternative to traditional expert annotation.
- It identifies two main challenges: annotation noise, which makes observed labels deviate from true user preferences, and selection bias, because users tend to give feedback only on responses they feel strongly about.
- CausalRM introduces a noise-aware surrogate loss that explicitly models how annotation errors occur and is provably equivalent to the primal loss under noise-free conditions, and it uses propensity scores to reweight training samples and remove the user-preference bias (see the sketch after this list).
- Experiments across diverse LLM backbones and benchmarks show substantial gains, including 49.2% on WildGuardMix and 32.7% on HarmBench, and code is available on the project website.
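To make the surrogate-loss and propensity-reweighting ideas concrete, here is a minimal sketch in Python/PyTorch. It is not the paper's released implementation: the function name `noise_aware_ipw_loss`, the fixed `flip_rate`, and the assumption that per-sample propensity scores have already been estimated are all illustrative assumptions.

```python
# Illustrative sketch (not the CausalRM code) combining the two ideas above:
# (1) a noise-aware surrogate that models the probability of a flipped label, and
# (2) inverse-propensity weighting to correct for which interactions receive feedback.

import torch


def noise_aware_ipw_loss(
    reward_chosen: torch.Tensor,    # r(x, y_chosen), shape (batch,)
    reward_rejected: torch.Tensor,  # r(x, y_rejected), shape (batch,)
    propensity: torch.Tensor,       # estimated P(feedback observed | x, y), shape (batch,)
    flip_rate: float = 0.1,         # assumed probability that an observed label is flipped
) -> torch.Tensor:
    """Propensity-weighted, noise-aware pairwise reward-model loss (sketch)."""
    # Standard Bradley-Terry probability that the "chosen" response is truly preferred.
    p_clean = torch.sigmoid(reward_chosen - reward_rejected)

    # Noise-aware surrogate: the observed label agrees with the true preference
    # with probability (1 - flip_rate), otherwise it is flipped.
    p_observed = (1.0 - flip_rate) * p_clean + flip_rate * (1.0 - p_clean)

    # Negative log-likelihood of the observed (possibly noisy) comparisons,
    # reweighted by inverse propensity so rarely-rated responses are not under-counted.
    nll = -torch.log(p_observed.clamp_min(1e-8))
    weights = 1.0 / propensity.clamp_min(1e-3)
    return (weights * nll).mean()


if __name__ == "__main__":
    # Toy usage with random reward scores and propensities.
    torch.manual_seed(0)
    r_chosen = torch.randn(8)
    r_rejected = torch.randn(8)
    prop = torch.rand(8) * 0.5 + 0.25
    print(noise_aware_ipw_loss(r_chosen, r_rejected, prop).item())
```

In this sketch, setting `flip_rate` to 0 collapses the surrogate to the standard Bradley-Terry loss, mirroring the noise-free equivalence described in the key points.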
Related Articles
I Was Wrong About AI Coding Assistants. Here's What Changed My Mind (and What I Built About It).
Dev.to

Interesting loop
Reddit r/LocalLLaMA
Qwen3.5-122B-A10B Uncensored (Aggressive) — GGUF Release + new K_P Quants
Reddit r/LocalLLaMA
A supervisor or "manager" AI agent is the wrong way to control AI
Reddit r/artificial
FeatherOps: Fast fp8 matmul on RDNA3 without native fp8
Reddit r/LocalLLaMA