KDFlow: A User-Friendly and Efficient Knowledge Distillation Framework for Large Language Models

arXiv cs.CL / 3/25/2026


Key Points

  • The paper introduces KDFlow, a knowledge-distillation framework that improves the training and inference efficiency of teacher-student distillation when compressing large language models into smaller ones.
  • KDFlow uses a decoupled architecture that combines FSDP2 for training with SGLang for teacher inference, aiming to fully leverage the strengths of each system.
  • To reduce communication overhead, it transmits only the teacher’s hidden states (via zero-copy transfer) and then recomputes logits on the student side.
  • The framework supports both off-policy and on-policy distillation and provides extensible, user-friendly APIs, including support for cross-tokenizer knowledge distillation.
  • Reported experiments show KDFlow achieves a 1.44× to 6.36× speedup over existing KD frameworks, with code made available on GitHub.
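The hidden-state trick in the third bullet can be sketched in a few lines: rather than shipping teacher logits of shape [B, T, V] across processes, the teacher sends final-layer hidden states of shape [B, T, H] (with H much smaller than V), and the student side recomputes the logits using the teacher's LM-head weight. All names and sizes below are illustrative assumptions, not KDFlow's actual API:

```python
import numpy as np

# Illustrative sizes (not from the paper): the hidden size H is much smaller
# than the vocabulary size V, so transferring hidden states instead of logits
# shrinks the cross-process payload by a factor of roughly V / H.
B, T, H, V = 2, 8, 64, 1024

rng = np.random.default_rng(0)
lm_head = rng.standard_normal((V, H))            # teacher's output projection
teacher_hidden = rng.standard_normal((B, T, H))  # what the teacher process sends

# Student-side recomputation of the full teacher logits.
teacher_logits = teacher_hidden @ lm_head.T      # shape [B, T, V]

transfer_saving = V / H                          # elements saved per token
print(teacher_logits.shape, transfer_saving)
```

For realistic LLM dimensions (e.g. H ≈ 4K, V ≈ 128K) the same ratio is far larger, which is why the paper pairs this with zero-copy transfer to keep communication off the critical path.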

Abstract

Knowledge distillation (KD) is an essential technique to compress large language models (LLMs) into smaller ones. However, despite the distinct roles of the student model and the teacher model in KD, most existing frameworks still use a homogeneous training backend (e.g., FSDP and DeepSpeed) for both models, leading to suboptimal training efficiency. In this paper, we present a novel framework for LLM distillation, termed KDFlow, which features a decoupled architecture and employs SGLang for teacher inference. By bridging the training efficiency of FSDP2 and the inference efficiency of SGLang, KDFlow achieves full utilization of both advantages in a unified system. Moreover, instead of transferring full logits across different processes, our framework only transmits the teacher's hidden states using zero-copy data transfer and recomputes the logits on the student side, effectively balancing the communication cost and KD performance. Furthermore, our framework supports both off-policy and on-policy distillation and incorporates KD algorithms for cross-tokenizer KD through highly extensible and user-friendly APIs. Experiments show that KDFlow can achieve 1.44× to 6.36× speedup compared to current KD frameworks, enabling researchers to rapidly prototype and scale LLM distillation with minimal engineering overhead. Code is available at: https://github.com/songmzhang/KDFlow
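For context on the objective being accelerated, a minimal token-level forward-KL distillation loss, a common white-box KD objective, can be written as below. This is a sketch under standard assumptions; KDFlow's actual loss implementations (including its cross-tokenizer variants) are not specified here:

```python
import numpy as np

def log_softmax(x, axis=-1):
    """Numerically stable log-softmax over the vocabulary axis."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def forward_kl(teacher_logits, student_logits):
    """Token-level KL(teacher || student), averaged over batch and positions.

    A common white-box KD objective; it drives the student's per-token
    distribution toward the teacher's.
    """
    log_p = log_softmax(teacher_logits)
    log_q = log_softmax(student_logits)
    p = np.exp(log_p)
    return (p * (log_p - log_q)).sum(axis=-1).mean()

rng = np.random.default_rng(0)
t = rng.standard_normal((4, 16, 100))  # [batch, seq, vocab] teacher logits
s = rng.standard_normal((4, 16, 100))  # student logits

loss = forward_kl(t, s)   # positive for mismatched distributions
zero = forward_kl(t, t)   # identical distributions give KL = 0
```

In KDFlow's setting, `teacher_logits` would be the logits recomputed on the student side from the transmitted hidden states, so this loss never requires shipping the full [batch, seq, vocab] tensor between processes.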