Decomposing the Delta: What Do Models Actually Learn from Preference Pairs?

arXiv cs.AI / 4/13/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies why preference-optimization methods like DPO and KTO improve reasoning, focusing on which properties of preference pairs drive downstream gains.
  • It decomposes “quality delta” into two components: generator-level delta (differences between the models generating chosen vs. rejected traces) and sample-level delta (how large the judged quality difference is within a given preference pair).
  • Experiments vary the preference generator’s scale and family to show that larger generator-level delta reliably boosts out-of-domain reasoning performance.
  • For sample-level delta, the authors use an LLM-as-a-judge to rate traces across multiple reasoning-quality dimensions and find that filtering/selecting by sample-level delta can make training more data-efficient.
  • The authors conclude with a two-part recipe for better reasoning alignment: maximize generator-level delta during preference construction and use sample-level delta to pick the most informative training examples.
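The second half of the recipe above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the field names, the 1–10 judge scale, and the threshold value are all assumptions, and the paper rates traces along multiple reasoning-quality dimensions rather than a single score.

```python
# Hypothetical sketch: filter preference pairs by sample-level delta,
# i.e. the judged quality gap between the chosen and rejected traces.
# Field names, score scale, and the threshold are illustrative assumptions.

def sample_level_delta(pair: dict) -> float:
    """Judged quality gap between the chosen and rejected traces."""
    return pair["chosen_score"] - pair["rejected_score"]

def filter_by_delta(pairs: list[dict], min_delta: float = 2.0) -> list[dict]:
    """Keep only pairs whose judged quality gap is large enough to be informative."""
    return [p for p in pairs if sample_level_delta(p) >= min_delta]

pairs = [
    {"prompt": "q1", "chosen_score": 9.0, "rejected_score": 4.0},  # delta = 5.0, kept
    {"prompt": "q2", "chosen_score": 7.0, "rejected_score": 6.5},  # delta = 0.5, dropped
]
kept = filter_by_delta(pairs)
print([p["prompt"] for p in kept])  # ['q1']
```

Training (e.g. DPO) would then run only on `kept`, which is how a large sample-level delta translates into more data-efficient preference optimization.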

Abstract

Preference optimization methods such as DPO and KTO are widely used for aligning language models, yet little is understood about what properties of preference data drive downstream reasoning gains. We ask: what aspects of a preference pair improve a reasoning model's performance on general reasoning tasks? We investigate two distinct notions of quality delta in preference data: generator-level delta, arising from differences in capability between the models that generate chosen and rejected reasoning traces, and sample-level delta, arising from the judged quality difference within an individual preference pair. To study generator-level delta, we vary the generator's scale and model family, and to study sample-level delta, we employ an LLM-as-a-judge to rate generated traces along multiple reasoning-quality dimensions. We find that increasing generator-level delta steadily improves performance on out-of-domain reasoning tasks and that filtering data by sample-level delta enables more data-efficient training. Our results suggest a twofold recipe for improving reasoning performance through preference optimization: maximize generator-level delta when constructing preference pairs, and exploit sample-level delta to select the most informative training examples.