AI Navigate

Scaling Reasoning Efficiently via Relaxed On-Policy Distillation

arXiv cs.LG / March 13, 2026


Key Points

  • The paper introduces REOPOLD (Relaxed On-Policy Distillation), reframing on-policy distillation as policy optimization by using the teacher-student log-likelihood ratio as a token reward.
  • It stabilizes optimization by relaxing strict imitation constraints through mixture-based reward clipping and entropy-based token-level dynamic sampling.
  • It also adopts a unified exploration-to-refinement schedule that shifts training from broad exploration early on to focused refinement later.
  • Empirically, REOPOLD delivers 6.7–12x higher sample efficiency and up to 3.32x inference speedups, enabling a 7B student to match a 32B teacher on visual reasoning across math, vision, and tool-use tasks.
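The core reframing above, with the teacher-student log-likelihood ratio as a per-token reward, can be sketched minimally. The paper's exact formulation (including its mixture-based clipping) is not given in this summary, so the symmetric clipping range, the `clip` value, and the function name below are illustrative assumptions:

```python
import numpy as np

def token_rewards(teacher_logprobs, student_logprobs, clip=5.0):
    """Per-token reward as the teacher-student log-likelihood ratio.

    Tokens the teacher finds more likely than the student earn a
    positive reward; clipping (illustrative, not the paper's exact
    mixture-based scheme) bounds extreme ratios for stability.
    """
    r = np.asarray(teacher_logprobs) - np.asarray(student_logprobs)
    return np.clip(r, -clip, clip)

# Toy 3-token sequence: log-probs each model assigned to the sampled tokens
teacher = [-0.1, -2.0, -0.5]
student = [-0.3, -0.4, -0.5]
print(token_rewards(teacher, student))
```

A policy-gradient update weighted by these rewards would then push the student toward tokens the teacher prefers, which is the sense in which on-policy distillation becomes policy optimization.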

Abstract

On-policy distillation is pivotal for transferring reasoning capabilities to capacity-constrained models, yet remains prone to instability and negative transfer. We show that on-policy distillation can be interpreted, both theoretically and empirically, as a form of policy optimization, where the teacher-student log-likelihood ratio acts as a token reward. From this insight, we introduce REOPOLD (Relaxed On-Policy Distillation), a framework that stabilizes optimization by relaxing the strict imitation constraints of standard on-policy distillation. Specifically, REOPOLD temperately and selectively leverages rewards from the teacher through mixture-based reward clipping, entropy-based token-level dynamic sampling, and a unified exploration-to-refinement training strategy. Empirically, REOPOLD surpasses its baselines with superior sample efficiency during training and enhanced test-time scaling at inference, across mathematical, visual, and agentic tool-use reasoning tasks. Specifically, REOPOLD outperforms recent RL approaches, achieving 6.7–12x greater sample efficiency, and enables a 7B student to match a 32B teacher in visual reasoning with a ~3.32x inference speedup.
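The entropy-based token-level dynamic sampling mentioned in the abstract can also be sketched. The paper's exact criterion is not given in this summary; the sketch below assumes one plausible reading, keeping only high-entropy (uncertain) token positions for the distillation update, and the threshold `tau` and function names are hypothetical:

```python
import numpy as np

def entropy(probs):
    # Shannon entropy (in nats) per token position over the vocabulary
    p = np.asarray(probs)
    return -(p * np.log(np.where(p > 0, p, 1.0))).sum(axis=-1)

def select_tokens(student_probs, tau=0.5):
    """Boolean mask over token positions: keep only positions where the
    student is uncertain (entropy above an illustrative threshold tau),
    so the loss focuses on tokens with the most to learn from the teacher."""
    return entropy(student_probs) > tau

# Toy 3-token sequence over a 3-word vocabulary
probs = [[0.98, 0.01, 0.01],   # confident prediction  -> dropped
         [0.40, 0.30, 0.30],   # uncertain prediction  -> kept
         [0.34, 0.33, 0.33]]   # near-uniform          -> kept
mask = select_tokens(probs)
```

Restricting the update to such positions is one way a "relaxed" imitation constraint could avoid forcing the student to match the teacher on tokens it already predicts confidently.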