Robust Reward Modeling for Large Language Models via Causal Decomposition

arXiv cs.CL / 4/16/2026


Key Points

  • The paper proposes a causal decomposition approach to reward modeling that reduces reliance on spurious cues like response length and overly agreeable tone.
  • It learns a decoder that maps a candidate answer to a latent intent embedding of the prompt, using reconstruction error as an additional training signal to regularize the reward model.
  • The authors provide theoretical justification that the reconstruction-error signal emphasizes prompt-dependent information while suppressing prompt-independent shortcuts.
  • Experiments across math, helpfulness, and safety benchmarks show the method improves candidate selection behavior, achieving 0.877 accuracy in selecting shorter and less sycophantic candidates.
  • Integrating this signal into reward-model training for Gemma-2-2B-it and Gemma-2-9B-it raises RewardBench accuracy from 0.832 to 0.868 and improves Best-of-N win rates while remaining robust under controlled rewrite drift tests.
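The summary does not give the exact training objective, but the described mechanism can be sketched as a standard Bradley-Terry preference loss plus a reconstruction-error regularizer. Everything here is illustrative: the linear decoder `decoder_W`, the hinge-style comparison of reconstruction errors, and the weight `lam` are assumptions, not the paper's actual formulation.

```python
import numpy as np

def recon_error(decoder_W, resp_emb, intent_emb):
    # Hypothetical decoder: a linear map from the response embedding to a
    # predicted prompt-intent embedding; the error is the squared distance
    # between the prediction and the true intent embedding.
    pred = decoder_W @ resp_emb
    return float(np.sum((pred - intent_emb) ** 2))

def preference_loss(r_chosen, r_rejected, e_chosen, e_rejected, lam=0.5):
    # Bradley-Terry term on the reward margin between chosen and rejected.
    bt = -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))
    # Assumed regularizer: penalize pairs where the chosen answer
    # reconstructs the prompt's intent *worse* than the rejected one,
    # discouraging prompt-independent shortcuts (length, agreeable tone).
    reg = max(0.0, e_chosen - e_rejected)
    return float(bt + lam * reg)
```

Under this sketch, a pair whose chosen answer both scores higher and reconstructs the intent better incurs a strictly smaller loss than the reversed pair, which is the grounding effect the Key Points describe.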

Abstract

Reward models are central to aligning large language models, yet they often overfit to spurious cues such as response length and overly agreeable tone. Most prior work weakens these cues directly by penalizing or controlling specific artifacts, but it does not explicitly encourage the model to ground preferences in the prompt's intent. We learn a decoder that maps a candidate answer to the latent intent embedding of the input. The reconstruction error is used as a signal to regularize the reward model training. We provide theoretical evidence that this signal emphasizes prompt-dependent information while suppressing prompt-independent shortcuts. Across math, helpfulness, and safety benchmarks, the decoder selects shorter and less sycophantic candidates with 0.877 accuracy. Incorporating this signal into RM training in Gemma-2-2B-it and Gemma-2-9B-it increases RewardBench accuracy from 0.832 to 0.868. For Best-of-N selection, our framework increases length-controlled win rates while producing shorter outputs, and remains robust to lengthening and mild off-topic drift in controlled rewrite tests.
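For the Best-of-N use described above, one plausible reading is that candidates are ranked by reward minus a penalty proportional to their intent-reconstruction error. The combination rule, the callables `reward_fn` and `recon_fn`, and the weight `lam` below are all assumptions for illustration, not the paper's stated selection procedure.

```python
def best_of_n(prompt, candidates, reward_fn, recon_fn, lam=0.5):
    # Score each candidate by its scalar reward minus a penalty for poorly
    # reconstructing the prompt's intent, then return the top-scoring one.
    # Candidates that pad length or merely agree (and so encode little
    # prompt-specific information) would be demoted by the penalty term.
    scores = [reward_fn(prompt, c) - lam * recon_fn(prompt, c)
              for c in candidates]
    return candidates[scores.index(max(scores))]
```

With a pure length-based reward the longest candidate wins; adding a large reconstruction penalty on that candidate flips the choice, mirroring the shorter-output behavior reported for Best-of-N.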