Large Language Model Post-Training: A Unified View of Off-Policy and On-Policy Learning

arXiv cs.CL / 4/10/2026

Key Points

  • The paper surveys LLM post-training methods and proposes a unified framework based on how they intervene on model behavior rather than on differing objective labels alone.
  • It organizes learning into two regimes, off-policy learning from externally supplied trajectories and on-policy learning from learner-generated rollouts, and then interprets methods through two recurring roles: effective support expansion and policy reshaping.
  • The authors add a systems-level concept, behavioral consolidation, to describe how techniques preserve, transfer, and amortize behaviors across training stages and model transitions.
  • The framework maps major paradigms (e.g., SFT, preference optimization, on-policy RL, distillation) to these roles, arguing that SFT can serve either support expansion or policy reshaping, while preference-based methods are usually off-policy reshaping.
  • The paper concludes that improving post-training increasingly depends on coordinated system/stage design instead of any single dominant training objective.
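
The off-policy/on-policy split above hinges on trajectory provenance: the same update rule behaves differently depending on whether the data comes from an external source or from the learner itself. A minimal toy sketch of that distinction (not from the paper; the `update` and `sample_rollout` helpers and the dict-valued "policy" are hypothetical illustrations):

```python
import random

def update(policy, trajectory, lr=0.1):
    """Nudge the toy policy toward the actions seen in a trajectory."""
    for action in trajectory:
        policy[action] = policy.get(action, 0.0) + lr
    total = sum(policy.values())
    return {a: p / total for a, p in policy.items()}  # renormalize

def sample_rollout(policy, rng, length=3):
    """On-policy provenance: the learner generates its own trajectory."""
    actions, weights = zip(*policy.items())
    return rng.choices(actions, weights=weights, k=length)

rng = random.Random(0)
policy = {"a": 0.5, "b": 0.5}

# Off-policy regime: the trajectory is externally supplied (e.g. a
# demonstration), independent of the current policy.
policy = update(policy, ["a", "a", "b"])

# On-policy regime: the trajectory is sampled from the learner itself,
# so the data distribution shifts as the policy changes.
policy = update(policy, sample_rollout(policy, rng))

assert abs(sum(policy.values()) - 1.0) < 1e-9  # still a distribution
```

The point of the toy is only that the update rule is identical in both calls; what differs, and what the survey organizes the field by, is where the trajectory came from.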

Abstract

Post-training has become central to turning pretrained large language models (LLMs) into aligned and deployable systems. Recent progress spans supervised fine-tuning (SFT), preference optimization, reinforcement learning (RL), process supervision, verifier-guided methods, distillation, and multi-stage pipelines. Yet these methods are often discussed in fragmented ways, organized by labels or objective families rather than by the behavioral bottlenecks they address. This survey argues that LLM post-training is best understood as structured intervention on model behavior. We organize the field first by trajectory provenance, which defines two primary learning regimes: off-policy learning on externally supplied trajectories, and on-policy learning on learner-generated rollouts. We then interpret methods through two recurring roles -- effective support expansion, which makes useful behaviors more reachable, and policy reshaping, which improves behavior within already reachable regions -- together with a complementary systems-level role, behavioral consolidation, which preserves, transfers, and amortizes behavior across stages and model transitions. This perspective yields a unified reading of major paradigms. SFT may serve either support expansion or policy reshaping, whereas preference-based methods are usually off-policy reshaping. On-policy RL often improves behavior on learner-generated states, though under stronger guidance it can also make hard-to-reach reasoning paths reachable. Distillation is often best understood as consolidation rather than only compression, and hybrid pipelines emerge as coordinated multi-stage compositions. Overall, the framework helps diagnose post-training bottlenecks and reason about stage composition, suggesting that progress in LLM post-training increasingly depends on coordinated system design rather than any single dominant objective.
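
The abstract's closing claim, that hybrid pipelines are "coordinated multi-stage compositions", can be made concrete with a small sketch. Everything below is a hypothetical illustration, not code from the paper: each stage is tagged with the behavioral role the framework assigns it, and a pipeline is just their sequential composition over a toy model state.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    role: str  # "support_expansion", "policy_reshaping", or "consolidation"
    apply: Callable[[dict], dict]

def run_pipeline(model, stages):
    """Compose stages in order, recording which role each one played."""
    log = []
    for stage in stages:
        model = stage.apply(model)
        log.append((stage.name, stage.role))
    return model, log

# Toy "model" state: which behaviors are reachable, and how good they are.
model = {"reachable": {"basic"}, "quality": 0.2}

pipeline = [
    # SFT used here for support expansion: makes a new behavior reachable.
    Stage("sft", "support_expansion",
          lambda m: {**m, "reachable": m["reachable"] | {"reasoning"}}),
    # Preference optimization and on-policy RL reshape behavior
    # within the already-reachable region.
    Stage("preference_opt", "policy_reshaping",
          lambda m: {**m, "quality": m["quality"] + 0.3}),
    Stage("on_policy_rl", "policy_reshaping",
          lambda m: {**m, "quality": m["quality"] + 0.3}),
    # Distillation as consolidation: preserves behavior across a
    # model transition rather than adding anything new.
    Stage("distill", "consolidation", lambda m: m),
]

final, log = run_pipeline(model, pipeline)
print(log[0])  # ('sft', 'support_expansion')
```

Reading a pipeline as a typed sequence of roles, rather than a bag of objectives, is what lets the framework diagnose bottlenecks: if "reasoning" never enters the reachable set, no amount of reshaping downstream will surface it.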