Bridging SFT and RL: Dynamic Policy Optimization for Robust Reasoning

arXiv cs.LG / April 13, 2026


Key Points

  • Large language model post-training methods face a bias–variance dilemma: supervised fine-tuning (SFT) is stable but biased, while reinforcement learning (RL) explores but has high gradient variance.
  • The paper introduces DYPO (Dynamic Policy Optimization), a unified framework that addresses this conflict via Group Alignment Loss (GAL), Multi-Teacher Distillation, and a reward-driven exploitation–exploration gating mechanism.
  • A theoretical analysis claims DYPO can linearly reduce fitting bias while minimizing overall variance by structuring how SFT and RL signals are combined.
  • Experiments on complex reasoning and out-of-distribution tasks show DYPO improves average performance by 4.8% and 13.3% respectively compared with traditional sequential pipelines.
  • The authors provide public code for DYPO, enabling researchers to test and extend the approach on their own LLM post-training setups.
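The reward-driven exploitation–exploration gating described above can be sketched as a reward-dependent blend of the two loss signals. The function below is an illustrative sketch only: the sigmoid gate, the `temperature` parameter, and all names are assumptions for exposition, not DYPO's published formulation.

```python
import math

def gated_loss(sft_loss: float, rl_loss: float,
               mean_reward: float, temperature: float = 1.0) -> float:
    """Blend SFT and RL losses with a reward-driven gate.

    Intuition: when recent rewards are low, lean on stable SFT
    supervision (exploitation of demonstrations); as rewards
    improve, shift weight toward exploratory RL.
    The sigmoid gate is an illustrative choice, not DYPO's.
    """
    gate = 1.0 / (1.0 + math.exp(-mean_reward / temperature))  # in (0, 1)
    return (1.0 - gate) * sft_loss + gate * rl_loss
```

For example, with a strongly negative mean reward the combined loss is dominated by the SFT term, and with a strongly positive one by the RL term; the actual arbitration rule in the paper may differ.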

Abstract

Post-training paradigms for Large Language Models (LLMs), primarily Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), face a fundamental dilemma: SFT provides stability (low variance) but suffers from high fitting bias, while RL enables exploration (low bias) but grapples with high gradient variance. Existing unified optimization strategies often employ naive loss weighting, overlooking the statistical conflict between these distinct gradient signals. In this paper, we provide a rigorous theoretical analysis of this bias-variance trade-off and propose DYPO (Dynamic Policy Optimization), a unified framework designed to structurally mitigate this conflict. DYPO integrates three core components: (1) a Group Alignment Loss (GAL) that leverages intrinsic group dynamics to significantly reduce RL gradient variance; (2) a Multi-Teacher Distillation mechanism that corrects SFT fitting bias via diverse reasoning paths; and (3) a Dynamic Exploitation-Exploration Gating mechanism that adaptively arbitrates between stable SFT and exploratory RL based on reward feedback. Theoretical analysis confirms that DYPO linearly reduces fitting bias and minimizes overall variance. Extensive experiments demonstrate that DYPO significantly outperforms traditional sequential pipelines, achieving an average improvement of 4.8% on complex reasoning benchmarks and 13.3% on out-of-distribution tasks. Our code is publicly available at https://github.com/Tocci-Zhu/DYPO.
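The abstract says GAL exploits intrinsic group dynamics to cut RL gradient variance. A standard variance-reduction pattern along these lines, used by group-baseline methods such as GRPO, is to normalize each sampled completion's reward against its group's statistics. The sketch below illustrates that general technique; it is an assumption for exposition, not necessarily GAL's exact form.

```python
def group_normalized_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Center and scale rewards within a group of sampled completions.

    Subtracting the group mean as a baseline removes a large shared
    component of the reward signal, lowering gradient variance;
    dividing by the group standard deviation keeps advantage
    magnitudes comparable across prompts. This mirrors group-baseline
    RL objectives (e.g. GRPO); it is an illustration of the idea,
    not DYPO's published loss.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

Because the normalized advantages sum to zero within each group, completions are compared against their siblings rather than against a noisy global baseline, which is the variance-reduction effect the abstract attributes to group dynamics.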