Distillation Traps and Guards: A Calibration Knob for LLM Distillability

arXiv cs.LG / 4/22/2026


Key Points

  • The paper analyzes why knowledge distillation from LLM teachers to smaller student models can fail unpredictably, identifying key “distillation traps” that distort training signals.
  • It pinpoints the most fundamental issue as the teacher–student gap, which can lead to overconfident hallucinations, self-correction collapse, and local decoding degradation.
  • The authors propose a post-hoc calibration approach that uses reinforcement fine-tuning (RFT) to control a teacher model’s distillability, aiming to make KD behavior more reliable.
  • The method optimizes a combined objective (task utility, KL anchor, and cross-tokenizer calibration reward), and experiments show improved student performance when teachers are calibrated and distillable.
  • When teachers are calibrated to be undistillable, the teacher retains task performance while distilled students collapse, suggesting a practical lever for safer model IP protection.
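The combined RFT objective from the key points can be sketched as a weighted sum of its three terms. The weights `lam` and `mu` and the function signature below are hypothetical illustrations, not the paper's exact formulation:

```python
def rft_reward(task_utility, kl_to_anchor, calib_reward, lam=0.1, mu=1.0):
    """Hypothetical combination of the paper's three objective terms:
    reward task utility and the cross-tokenizer calibration signal,
    while penalizing KL divergence from the anchor (pre-RFT) teacher
    so the calibrated teacher does not drift from its original behavior."""
    return task_utility - lam * kl_to_anchor + mu * calib_reward
```

Flipping the sign of the calibration term (e.g. `mu = -1.0`) is one plausible way a teacher could be pushed toward undistillability while the task-utility and KL-anchor terms preserve its own performance.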

Abstract

Knowledge distillation (KD) transfers capabilities from large language models (LLMs) to smaller students, yet it can fail unpredictably and also underpins model leakage risks. Our analysis reveals several distillation traps that distort training signals: tail noise, off-policy instability, and, most fundamentally, the teacher-student gap. These traps manifest as overconfident hallucinations, self-correction collapse, and local decoding degradation, causing distillation to fail. Motivated by these findings, we propose a post-hoc calibration method that, to the best of our knowledge, enables control over a teacher's distillability for the first time via reinforcement fine-tuning (RFT). Our objective combines task utility, a KL anchor, and a cross-tokenizer calibration reward. This makes distillability a practical safety lever for foundation models, connecting robust teacher-student transfer with deployment-aware model protection. Experiments across math, knowledge QA, and instruction-following tasks show that students distilled from distillable calibrated teachers outperform SFT and KD baselines, while undistillable calibrated teachers retain their task performance yet cause distilled students to collapse, offering a practical knob for both better KD and model IP protection.
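To see where the traps described in the abstract enter training, consider standard token-level KD (not the paper's method): the student minimizes the KL divergence from the teacher's next-token distribution, so a mis-calibrated or overconfident teacher feeds distortion directly into the student's loss. A minimal sketch in plain Python:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened distribution over vocabulary logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_kl(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) for one token position: the per-token
    KD training signal. Noise in the teacher's low-probability tail
    ("tail noise") and teacher-student capacity mismatch both distort
    this quantity, which is the failure mode the paper analyzes."""
    p = softmax(teacher_logits, T)  # teacher distribution
    q = softmax(student_logits, T)  # student distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

When the student matches the teacher exactly the signal is zero; an overconfident teacher spike the student cannot reproduce yields a large loss on that token, illustrating how a single trap can dominate the gradient.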