Gumbel Distillation for Parallel Text Generation

arXiv cs.CL / 3/24/2026


Key Points

  • The paper introduces “Gumbel Distillation,” a model-agnostic distillation method designed to improve the generation quality of parallel (non-autoregressive) language models.
  • It uses the Gumbel-Max trick to create a deterministic mapping from latent Gumbel noise to output tokens generated by a high-performing autoregressive (AR) teacher.
  • The authors report substantial quality gains in experiments on LM1B and OpenWebText, including a 30.0% MAUVE score improvement and a 10.5% generative perplexity improvement over an MDLM baseline.
  • The method is described as compatible with multiple parallel decoding architectures, specifically including MDLM and BD3-LM, and the code is released publicly.

Abstract

The slow, sequential nature of autoregressive (AR) language models has driven the adoption of parallel decoding methods. However, these non-AR models often sacrifice generation quality because they struggle to model the complex joint distribution of token sequences. To narrow this performance gap, we introduce Gumbel Distillation, a novel distillation technique that enables parallel decoders to learn this distribution effectively. Our method leverages the Gumbel-Max trick to create a deterministic mapping from a latent Gumbel noise space to the output tokens of a high-performing AR teacher. As a model-agnostic technique, Gumbel Distillation seamlessly integrates with diverse parallel decoding architectures, including MDLM and BD3-LM. Experiments on LM1B and OpenWebText show that Gumbel Distillation substantially improves the generation quality of parallel language models, achieving a 30.0% improvement in MAUVE score and a 10.5% improvement in generative perplexity over an MDLM trained on the OpenWebText dataset. Code available at https://github.com/hxixixh/gumbel-distill.
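To make the core idea concrete: the Gumbel-Max trick rewrites categorical sampling as `argmax(logits + g)` where `g` is i.i.d. Gumbel(0, 1) noise, so that, with the noise fixed, the sampled token becomes a deterministic function of that noise. The sketch below is a minimal NumPy illustration of this property (not the paper's implementation; function and variable names are illustrative):

```python
import numpy as np

def gumbel_max_sample(logits, gumbel_noise):
    """Map a fixed Gumbel noise vector to a token index.

    argmax(logits + g), with g ~ Gumbel(0, 1), draws a sample from
    softmax(logits); fixing g makes the chosen token a deterministic
    function of the latent noise.
    """
    return int(np.argmax(logits + gumbel_noise))

rng = np.random.default_rng(0)
# Toy "teacher" next-token distribution over a 3-token vocabulary.
logits = np.log(np.array([0.6, 0.3, 0.1]))

# Deterministic: the same latent noise always yields the same token.
g = rng.gumbel(size=3)
assert gumbel_max_sample(logits, g) == gumbel_max_sample(logits, g)

# Marginally correct: across many noise draws, empirical token
# frequencies approach the teacher's softmax probabilities.
samples = [gumbel_max_sample(logits, rng.gumbel(size=3))
           for _ in range(100_000)]
freqs = np.bincount(samples, minlength=3) / len(samples)
print(freqs)  # close to [0.6, 0.3, 0.1]
```

This deterministic noise-to-token mapping is what lets a student model be trained to reproduce the teacher's outputs given the same latent noise, rather than matching only per-position marginals.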