TUR-DPO: Topology- and Uncertainty-Aware Direct Preference Optimization

arXiv cs.AI / 5/4/2026


Key Points

  • The paper argues that standard DPO can be brittle because it treats human preference signals as flat winner-vs-loser labels and is sensitive to noisy or fragile “chains of thought.”
  • It proposes TUR-DPO, which extends DPO by using lightweight reasoning “topologies” and combining semantic faithfulness, usefulness, and topology quality into a calibrated uncertainty signal.
  • TUR-DPO introduces a small learnable reward factorized over these components and plugs it into an uncertainty-weighted, RL-free DPO objective that uses only a fixed or moving reference policy (a rough sketch of this recipe follows after this list).
  • Experiments on multiple open 7–8B models across reasoning, factual QA, summarization, and helpful/harmless dialogue show higher judge win-rates, improved faithfulness, and better calibration than DPO.
  • The authors report that TUR-DPO also yields consistent gains in multimodal and long-context settings and can match or outperform PPO on reasoning-focused tasks while keeping training simpler and avoiding online rollouts.

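The sketch below illustrates one way the recipe in the bullets above could look in code. It is a minimal, assumed reading of the method, not the authors' implementation: the `FactorizedReward` head, the `tur_dpo_loss` function, the [0, 1] component scores, and the mapping from the reward gap to a per-pair weight are all hypothetical placeholders.

```python
# Minimal sketch (not the authors' code) of an uncertainty-weighted DPO loss
# with a small learnable reward factorized over faithfulness, utility, and
# topology-quality signals. All names and the weighting scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorizedReward(nn.Module):
    """Small learnable reward over (faithfulness, utility, topology) scores."""

    def __init__(self):
        super().__init__()
        self.head = nn.Linear(3, 1)  # one scalar reward per response

    def forward(self, signals: torch.Tensor) -> torch.Tensor:
        # signals: (batch, 3) component scores in [0, 1] -> (batch,) scalar reward
        return self.head(signals).squeeze(-1)


def tur_dpo_loss(policy_logp_w, policy_logp_l,   # log pi_theta(y_w|x), log pi_theta(y_l|x)
                 ref_logp_w, ref_logp_l,         # same under the (fixed or moving) reference
                 signals_w, signals_l,           # (batch, 3) component scores per response
                 reward_head: FactorizedReward,
                 beta: float = 0.1):
    # Standard DPO margin between implicit rewards of winner and loser.
    dpo_margin = beta * ((policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l))

    # Learnable factorized reward turns component scores into a preference margin.
    r_w = reward_head(signals_w)
    r_l = reward_head(signals_l)

    # A simple calibrated-confidence proxy: pairs whose component rewards barely
    # differ are treated as uncertain and down-weighted (assumed form, not the paper's).
    confidence = torch.sigmoid(r_w - r_l)             # in (0, 1)
    weight = (2.0 * confidence - 1.0).clamp(min=0.0)  # 0 when ambiguous, toward 1 when clear

    # Uncertainty-weighted DPO objective; no rollouts, reference policy only.
    per_pair = -F.logsigmoid(dpo_margin)
    return (weight * per_pair).mean()
```

Pairs whose faithfulness, utility, and topology scores barely separate the winner from the loser receive weights near zero, which is one way to read the claim that noisy or brittle preferences are down-weighted rather than trusted outright.
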
Abstract

Aligning large language models (LLMs) with human preferences is commonly done via reinforcement learning from human feedback (RLHF) with Proximal Policy Optimization (PPO) or, more simply, via Direct Preference Optimization (DPO). While DPO is stable and RL-free, it treats preferences as flat winner vs. loser signals and is sensitive to noisy or brittle preferences arising from fragile chains of thought. We propose TUR-DPO, a topology- and uncertainty-aware variant of DPO that rewards how answers are derived, not only what they say, by eliciting lightweight reasoning topologies and combining semantic faithfulness, utility, and topology quality into a calibrated uncertainty signal. A small learnable reward is factorized over these signals and incorporated into an uncertainty-weighted DPO objective that remains RL-free and relies only on a fixed or moving reference policy. Empirically, across open 7-8B models and benchmarks spanning mathematical reasoning, factual question answering, summarization, and helpful/harmless dialogue, TUR-DPO improves judge win-rates, faithfulness, and calibration relative to DPO while preserving training simplicity and avoiding online rollouts. We further observe consistent gains in multimodal and long-context settings, and show that TUR-DPO matches or exceeds PPO on reasoning-centric tasks while maintaining operational simplicity.
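For orientation, the standard DPO loss that TUR-DPO builds on, together with a hedged reading of the uncertainty-weighted variant described in the abstract, can be written as follows (the weight $w$ and its exact calibration are assumed notation, not taken from the paper):

$$
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right],
$$

$$
\mathcal{L}_{\mathrm{TUR\text{-}DPO}} \approx -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[\, w(x, y_w, y_l)\, \log \sigma\!\left(\beta \, \Delta_\theta\right)\right], \qquad \Delta_\theta = \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)},
$$

where $w \in [0, 1]$ is derived from the calibrated uncertainty signal combining semantic faithfulness, utility, and topology quality, so that ambiguous pairs contribute less to the gradient while the objective stays RL-free and anchored to the reference policy.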