A Survey of On-Policy Distillation for Large Language Models

arXiv cs.LG / 4/2/2026


Key Points

  • The paper surveys on-policy distillation (OPD) methods for large language models, contrasting them with the dominant off-policy distillation paradigm that can suffer from exposure bias at inference time.
  • It proposes a unified framework based on f-divergences to organize OPD approaches using three dimensions: type of feedback signal, level of teacher access, and loss granularity.
  • The survey reviews how different OPD variants (e.g., divergence minimization, reward-guided learning, self-play) fit into this taxonomy, and analyzes representative methods and reported industrial deployments.
  • It highlights key open research problems such as deriving distillation scaling laws, improving uncertainty-aware feedback, and extending distillation to the level of full agents rather than only token/sequence outputs.

Abstract

Knowledge distillation has become a primary mechanism for transferring reasoning and domain expertise from frontier Large Language Models (LLMs) to smaller, deployable students. However, the dominant paradigm remains *off-policy*: students train on static teacher-generated data and never encounter their own errors during learning. This train–test mismatch, an instance of *exposure bias*, causes prediction errors to compound autoregressively at inference time. On-Policy Distillation (OPD) addresses this by letting the student generate its own trajectories and receive teacher feedback on these self-generated outputs, grounding distillation in the theory of interactive imitation learning. Despite rapid growth spanning divergence minimization, reward-guided learning, and self-play, the OPD literature remains fragmented with no unified treatment. This survey provides the first comprehensive overview of OPD for LLMs. We introduce a unified f-divergence framework over on-policy samples and organize the landscape along three orthogonal dimensions: *feedback signal* (logit-based, outcome-based, or self-play), *teacher access* (white-box, black-box, or teacher-free), and *loss granularity* (token-level, sequence-level, or hybrid). We systematically analyze representative methods, examine industrial deployments, and identify open problems including distillation scaling laws, uncertainty-aware feedback, and agent-level distillation.
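To make the on-policy loop concrete, here is a minimal NumPy sketch of one white-box, token-level OPD step as the abstract describes it: the student samples its own next token (on-policy), the teacher scores the same context with full logits (white-box access), and the loss is a reverse KL between the two distributions, one instance of the survey's f-divergence family. The logit tables, vocabulary size, and sequence length are toy placeholders, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def reverse_kl(p_student, p_teacher):
    """KL(student || teacher): a mode-seeking member of the f-divergence family."""
    return float(np.sum(p_student * (np.log(p_student) - np.log(p_teacher))))

# Toy setup: both "models" are random per-step logit tables over a 5-token vocab.
vocab, seq_len = 5, 4
loss = 0.0
for _ in range(seq_len):
    student_logits = rng.normal(size=vocab)
    teacher_logits = rng.normal(size=vocab)
    p_s = softmax(student_logits)

    # On-policy: the student samples its OWN next token (not the teacher's),
    # so training sees the same trajectories the student produces at inference.
    token = rng.choice(vocab, p=p_s)

    # White-box feedback: the teacher's full distribution at this position.
    p_t = softmax(teacher_logits)

    # Token-level loss accumulated along the self-generated trajectory.
    loss += reverse_kl(p_s, p_t)

print(f"trajectory loss: {loss:.4f}")
```

Swapping `reverse_kl` for forward KL, JSD, or a sequence-level reward turns this same loop into the other cells of the survey's taxonomy; only the feedback signal and loss granularity change, not the on-policy sampling.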
