Demystifying the unreasonable effectiveness of online alignment methods

arXiv cs.LG / 4/21/2026


Key Points

  • The paper examines why iterative, greedy online alignment methods (e.g., online RLHF and online DPO) perform much better in practice than KL-regularized regret theory suggests.
  • It argues that the mismatch comes from the regret metric: KL-regularized regret mixes the learning/statistical cost with exploration randomness created by a softened (regularized) training policy.
  • To disentangle these effects, the authors use a decision-centric “temperature-zero” (top-ranked at inference) regret criterion.
  • Under this decision-focused criterion, they prove that standard greedy online alignment methods achieve constant (O(1)) cumulative regret.
  • The results offer a clearer theoretical explanation for the empirical effectiveness of greedy alignment approaches by separating best-response identification from regularization-induced stochasticity.
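The distinction drawn above can be sketched in standard RLHF notation (the symbols here are ours for illustration, not necessarily the paper's). KL-regularized regret scores the full softened policy, while the temperature-zero criterion scores only its top-ranked response:

\[
\mathrm{Reg}^{\mathrm{KL}}_T = \sum_{t=1}^{T} \Bigl[ J_\beta(\pi^*_\beta) - J_\beta(\pi_t) \Bigr],
\qquad
J_\beta(\pi) = \mathbb{E}_{y \sim \pi(\cdot \mid x)}\bigl[r(x, y)\bigr] - \beta\,\mathrm{KL}\bigl(\pi \,\|\, \pi_{\mathrm{ref}}\bigr),
\]

versus

\[
\mathrm{Reg}^{0}_T = \sum_{t=1}^{T} \Bigl[ r(x, y^*) - r(x, \hat{y}_t) \Bigr],
\qquad
\hat{y}_t = \arg\max_{y} \pi_t(y \mid x).
\]

Under the first criterion, the \(\beta\)-softened policy keeps probability mass on suboptimal responses by design, so regret accumulates even after the best response is identified; under the second, regret stops accruing once \(\hat{y}_t = y^*\).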

Abstract

Iterative alignment methods based on purely greedy updates are remarkably effective in practice, yet existing theoretical guarantees of \(O(\log T)\) KL-regularized regret can seem pessimistic relative to their empirical performance. In this paper, we argue that this mismatch arises from the regret criterion itself: KL-regularized regret conflates the statistical cost of learning with the exploratory randomization induced by the softened training policy. To separate these effects, we study the traditional temperature-zero regret criterion, which evaluates only the top-ranked response at inference time. Under this decision-centric notion of performance, we prove that standard greedy online alignment methods, including online RLHF and online DPO, achieve constant \(O(1)\) cumulative regret. By isolating the cost of identifying the best response from the stochasticity induced by regularization, our results provide a sharper theoretical explanation for the superb practical efficiency of greedy alignment.
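The conflation the abstract describes can be seen in a minimal numerical sketch. The snippet below is illustrative only: the three candidate rewards, the uniform reference policy, and the regularization strength `beta` are hypothetical, and the closed form used is the standard KL-regularized optimum \(\pi^*_\beta \propto \pi_{\mathrm{ref}} \exp(r/\beta)\), not the paper's algorithm. Even for this optimal softened policy, expected reward under sampling stays below the best reward, while the top-ranked (temperature-zero) choice already attains it.

```python
import numpy as np

rewards = np.array([1.0, 0.6, 0.2])  # hypothetical rewards for 3 candidate responses
ref_logits = np.zeros(3)             # uniform reference policy (hypothetical)
beta = 0.5                           # KL-regularization strength (hypothetical)

# Optimal KL-regularized policy: softmax of reference logits plus reward / beta.
soft_policy = np.exp(ref_logits + rewards / beta)
soft_policy /= soft_policy.sum()

# Sampling from the softened policy keeps mass on suboptimal responses,
# so its expected reward is strictly below the best achievable reward.
expected_reward = soft_policy @ rewards

# Temperature-zero evaluation: only the top-ranked response counts.
greedy_reward = rewards[np.argmax(soft_policy)]

print(expected_reward < greedy_reward)  # → True: sampling regret persists, greedy regret is zero
```

This is the mechanism behind the gap between the \(O(\log T)\) and \(O(1)\) rates: the per-round sampling gap above never vanishes for fixed \(\beta\), whereas the greedy choice stops incurring regret once the best response is ranked first.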