Demystifying OPD: Length Inflation and Stabilization Strategies for Large Language Models

arXiv cs.CL / 4/10/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • On-policy distillation (OPD) for large language models can suffer a “truncation collapse” failure mode where on-policy rollouts abruptly inflate in length, causing truncated trajectories to dominate training data and destabilize learning.
  • The observed truncation collapse coincides with repetition saturation and produces biased gradient signals, leading to sharp degradation in validation performance.
  • The paper attributes the issue to a harmful interaction between student-induced data collection and the distillation objective, which implicitly favors long and repetitive rollouts.
  • To fix this, the authors propose StableOPD, combining a reference-based divergence constraint with rollout mixture distillation to reduce repetition-driven length inflation and stabilize training.
  • Experiments across multiple math reasoning datasets show StableOPD prevents truncation collapse, stabilizes training dynamics, and improves performance by an average of 7.2% versus baseline OPD.

Abstract

On-policy distillation (OPD) trains student models under their own induced distribution while leveraging supervision from stronger teachers. We identify a failure mode of OPD: as training progresses, on-policy rollouts can undergo abrupt length inflation, causing truncated trajectories to dominate the training data. This truncation collapse coincides with abrupt repetition saturation and induces biased gradient signals, leading to severe training instability and sharp degradation in validation performance. We attribute this problem to the interaction between student-induced data collection and the distillation objective, which implicitly favors long and repetitive rollouts. To address this issue, we propose StableOPD, a stabilized OPD framework that combines a reference-based divergence constraint with rollout mixture distillation. These together mitigate repetition-induced length inflation and further stabilize OPD training. Across multiple math reasoning datasets, our approach prevents truncation collapse, stabilizes training dynamics, and improves performance by 7.2% on average.
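The two StableOPD components can be illustrated with a minimal sketch. This is an assumed formulation, not the paper's exact objective: it models the reference-based divergence constraint as a KL penalty toward a frozen reference distribution added to a reverse-KL distillation term, and rollout mixture distillation as a coin flip between student-generated and teacher-generated rollouts. All function names and the `beta`/`mix_ratio` parameters are hypothetical.

```python
import math
import random

def kl(p, q):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def stable_opd_loss(student, teacher, reference, beta=0.1):
    # Distillation term: reverse KL(student || teacher), as in on-policy
    # distillation, plus a reference-based divergence constraint (assumed
    # form) that penalizes drift from a frozen reference policy. The
    # penalty discourages the degenerate, repetition-heavy distributions
    # that drive length inflation.
    return kl(student, teacher) + beta * kl(student, reference)

def sample_rollout_source(mix_ratio=0.5):
    # Rollout mixture distillation (sketch): with probability mix_ratio,
    # distill on a teacher-generated rollout instead of a student rollout,
    # so truncated student trajectories cannot dominate the training data.
    return "teacher" if random.random() < mix_ratio else "student"

# Toy next-token distributions over a 3-symbol vocabulary.
student = [0.7, 0.2, 0.1]
teacher = [0.6, 0.3, 0.1]
reference = [0.5, 0.3, 0.2]
print(stable_opd_loss(student, teacher, reference))
print(sample_rollout_source())
```

Setting `beta=0.0` recovers plain reverse-KL distillation, so the constraint's effect can be isolated by sweeping `beta`; the mixture ratio plays the analogous role for the data-collection side.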