Proximal Point Nash Learning from Human Feedback

arXiv stat.ML / 2026/3/24


Key Points

  • The paper argues that standard RLHF approaches using learned reward models (often tied to Bradley–Terry-style preference assumptions) may poorly reflect real human preference behaviors such as intransitivity.
  • It proposes Nash Learning from Human Feedback (NLHF), treating RLHF as a game-theoretic task of finding a Nash equilibrium defined by human preferences, and studies this under a realistic policy parametrization setup.
  • The authors analyze a self-play policy gradient method (equivalent to Online IPO), proving high-probability last-iterate convergence while identifying a potential stability limitation in the dynamics.
  • To address stability concerns, they introduce a proximal point framework (yielding a stabilized algorithm called Nash Prox) and prove high-probability last-iterate convergence for the combined method.
  • They apply Nash Prox to post-training of large language models and validate its empirical performance.
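
The self-play update described in these bullets can be illustrated on a toy intransitive preference game. Below is a minimal sketch (not the paper's exact algorithm): a softmax policy plays against a frozen copy of itself and ascends its expected preference margin, with a KL-style pull toward a uniform reference in the spirit of Online IPO. The preference matrix `P`, step size `eta`, and regularization strength `tau` are illustrative assumptions.

```python
import numpy as np

# Toy intransitive preference game (rock-paper-scissors style):
# P[a, b] = probability that a human prefers response a over response b.
P = np.array([[0.5, 0.9, 0.1],
              [0.1, 0.5, 0.9],
              [0.9, 0.1, 0.5]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.array([2.0, -1.0, 0.0])  # softmax logits, started off-equilibrium
theta_ref = np.zeros(3)             # uniform reference policy
eta, tau = 0.1, 0.5                 # illustrative step size / KL strength

for _ in range(5000):
    pi = softmax(theta)
    margin = P @ pi - 0.5           # preference margin of each action vs. the frozen copy
    grad = (margin - pi @ margin) - tau * (theta - theta_ref)
    theta = theta + eta * grad

print(np.round(softmax(theta), 3))  # → [0.333 0.333 0.333], the game's Nash policy
```

In this toy game, setting `tau = 0` makes the discrete update spiral around the equilibrium instead of converging — a small-scale analogue of the stability concern the bullets mention.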

Abstract

Traditional Reinforcement Learning from Human Feedback (RLHF) often relies on reward models, frequently assuming preference structures like the Bradley–Terry model, which may not accurately capture the complexities of real human preferences (e.g., intransitivity). Nash Learning from Human Feedback (NLHF) offers a more direct alternative by framing the problem as finding a Nash equilibrium of a game defined by these preferences. While many works study the Nash learning problem directly in the policy space, we instead consider it under a more realistic policy parametrization setting. We first analyze a simple self-play policy gradient method, which is equivalent to Online IPO. We establish high-probability last-iterate convergence guarantees for this method, but our analysis also reveals a possible stability limitation of the underlying dynamics. Motivated by this, we embed the self-play updates into a proximal point framework, yielding a stabilized algorithm. For this combined method, we prove high-probability last-iterate convergence and discuss its more practical version, which we call Nash Prox. Finally, we apply this method to post-training of large language models and validate its empirical performance.
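
As a rough illustration of the proximal point idea behind the abstract (a sketch under assumptions, not the authors' Nash Prox implementation): each outer iteration approximately solves a proximally regularized subproblem with a few inner self-play gradient steps, and the proximal term damps the rotational self-play dynamics. The toy preference matrix and all constants below are illustrative.

```python
import numpy as np

P = np.array([[0.5, 0.9, 0.1],      # toy intransitive preference game:
              [0.1, 0.5, 0.9],      # P[a, b] = Pr(a preferred over b)
              [0.9, 0.1, 0.5]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.array([2.0, -1.0, 0.0])  # softmax logits, started off-equilibrium
lam, eta = 1.0, 0.2                 # illustrative proximal weight / inner step size

for _ in range(400):                # outer proximal-point iterations
    anchor = theta.copy()
    for _ in range(50):             # inner gradient steps on the subproblem:
        pi = softmax(theta)         #   self-play objective - ||theta - anchor||^2 / (2*lam)
        margin = P @ pi - 0.5
        grad = (margin - pi @ margin) - (theta - anchor) / lam
        theta = theta + eta * grad

print(np.round(softmax(theta), 3))  # last iterate approaches the uniform Nash policy
```

Each outer step behaves like an implicit (backward) Euler step on the self-play dynamics, which is what stabilizes the otherwise cycling trajectory.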