DP-OPD: Differentially Private On-Policy Distillation for Language Models

arXiv cs.LG / 4/7/2026


Key Points

  • The paper introduces Differentially Private On-Policy Distillation (DP-OPD), a synthesis-free method for compressing LLMs with differential privacy enforced only during DP-SGD training of the student model.
  • DP-OPD uses a frozen teacher to provide dense token-level targets on student-generated trajectories (on-policy), addressing utility loss commonly seen with DP-SGD applied to autoregressive generation.
  • The method instantiates “private generalized knowledge distillation” on continuation tokens and is evaluated under a strict privacy budget (ε=2.0).
  • Results show DP-OPD improves perplexity over DP fine-tuning and off-policy DP distillation, and outperforms synthesis-based DP distillation while simplifying the training pipeline.
  • The authors claim DP-OPD effectively collapses private compression into a single DP student-training loop by eliminating DP teacher training and offline synthetic text generation, with code planned for release after publication.

Abstract

Large language models (LLMs) are increasingly adapted to proprietary and domain-specific corpora that contain sensitive information, creating a tension between formal privacy guarantees and efficient deployment through model compression. Differential privacy (DP), typically enforced via DP-SGD, provides record-level protection but often incurs substantial utility loss in autoregressive generation, where optimization noise can amplify exposure bias and compounding errors along long rollouts. Existing approaches to private distillation either apply DP-SGD to both teacher and student, worsening computation and the privacy–utility tradeoff, or rely on DP synthetic text generation from a DP-trained teacher, avoiding DP on the student at the cost of DP-optimizing a large teacher and introducing an offline generation pipeline. We propose **Differentially Private On-Policy Distillation (DP-OPD)**, a synthesis-free framework that enforces privacy solely through DP-SGD on the student while leveraging a frozen teacher to provide dense token-level targets on *student-generated* trajectories. DP-OPD instantiates this idea via *private generalized knowledge distillation* on continuation tokens. Under a strict privacy budget (ε = 2.0), DP-OPD improves perplexity over DP fine-tuning and off-policy DP distillation, and outperforms synthesis-based DP distillation (Yelp: 44.15 → 41.68; BigPatent: 32.43 → 30.63), while substantially simplifying the training pipeline. In particular, **DP-OPD collapses private compression into a single DP student-training loop** by eliminating DP teacher training and offline synthetic text generation. Code will be released upon publication at https://github.com/khademfatemeh/dp_opd.
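To make the training loop concrete, here is a minimal toy sketch (not the authors' implementation) of the ingredients the abstract describes: the student samples continuations on-policy, a frozen teacher supplies dense token-level targets, a per-token divergence is computed on continuation tokens only, and the gradient step is privatized with DP-SGD-style per-example clipping plus Gaussian noise. The tiny GRU language models, the reverse KL as the instance of generalized knowledge distillation, and all hyperparameters (`C`, `sigma`, sizes) are illustrative assumptions; real use would involve actual LLMs and a privacy accountant, both omitted here.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
V, H, T = 32, 16, 8  # toy vocab size, hidden size, continuation length

class TinyLM(torch.nn.Module):
    """Stand-in autoregressive LM (hypothetical; a real setup uses an LLM)."""
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(V, H)
        self.rnn = torch.nn.GRU(H, H, batch_first=True)
        self.head = torch.nn.Linear(H, V)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

teacher, student = TinyLM(), TinyLM()
for p in teacher.parameters():
    p.requires_grad_(False)          # frozen teacher: no DP training needed

opt = torch.optim.SGD(student.parameters(), lr=0.1)
C, sigma = 1.0, 0.8                  # clipping norm, noise multiplier (illustrative)
prompts = torch.randint(0, V, (4, 4))  # toy "private records" used as prompts

def rollout(prompt):
    """On-policy: sample the continuation from the *student*."""
    seq = prompt.unsqueeze(0)
    with torch.no_grad():
        for _ in range(T):
            logits = student(seq)[:, -1]
            nxt = torch.multinomial(F.softmax(logits, -1), 1)
            seq = torch.cat([seq, nxt], dim=1)
    return seq

# One DP-SGD step: per-example gradients, clip, sum, add noise, average.
grads = [torch.zeros_like(p) for p in student.parameters()]
for prompt in prompts:
    seq = rollout(prompt)
    s_logp = F.log_softmax(student(seq[:, :-1]), -1)
    with torch.no_grad():
        t_logp = F.log_softmax(teacher(seq[:, :-1]), -1)
    # Reverse KL(student || teacher), restricted to continuation positions.
    cont = slice(prompt.numel() - 1, None)
    kl = (s_logp.exp() * (s_logp - t_logp)).sum(-1)[:, cont].sum(-1).mean()
    student.zero_grad()
    kl.backward()
    norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in student.parameters()))
    scale = (C / (norm + 1e-6)).clamp(max=1.0)   # per-example clipping
    for g, p in zip(grads, student.parameters()):
        g += p.grad * scale
for g in grads:
    g += sigma * C * torch.randn_like(g)          # Gaussian noise
    g /= len(prompts)
for p, g in zip(student.parameters(), grads):
    p.grad = g
opt.step()
```

Because privacy is enforced only through this noisy student update, the teacher never touches DP-SGD and no synthetic corpus is materialized, which is the single-loop simplification the paper claims.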