Consistent Diffusion Language Models

arXiv cs.LG / 5/4/2026


Key Points

  • The paper introduces Consistent Diffusion Language Models (CDLM), a new diffusion-based alternative to autoregressive language models aimed at fast, parallelizable text generation with far fewer refinement steps.
  • It argues that discrete diffusion lacks a deterministic probability-flow ODE analogue, so it replaces deterministic trajectories with stochastic “posterior bridges” available in closed form for common corruption processes such as masked and uniform diffusion (a sampling sketch for the masked case follows this list).
  • The core method, Multi-Path Discrete Consistency (MPDC), trains a denoiser to be path-invariant in expectation across these stochastic bridges, in a single-stage, teacher-free training setup.
  • The authors present a unified objective that links masked diffusion, continuous consistency models, and progressive/discrete distillation as analytic limits or practical approximations of one framework.
  • Experiments show CDLM sets a new state of the art for both conditional and unconditional text generation, with the biggest improvements in the few-step sampling regime and frequent wins over even multi-stage distilled baselines across sampling budgets.

Abstract

Diffusion language models (DLMs) are an attractive alternative to autoregressive models because they promise sublinear-time, parallel generation, yet practical gains remain elusive as high-quality samples still demand hundreds of refinement steps. In continuous domains, consistency training along the probability-flow ODE is a popular recipe to accelerate diffusion. For discrete diffusion, no analogous sample-space ODE exists, making direct adaptation ill-defined. We argue that the natural discrete substitute is not a deterministic trajectory but its stochastic counterpart: the exact posterior bridge, available in closed form for broad corruption families including masked and uniform diffusion. Building on this observation, we introduce Multi-Path Discrete Consistency (MPDC), a new principle that trains a denoiser to be path-invariant in expectation across these stochastic bridges, and instantiate it as the Consistent Diffusion Language Model (CDLM), a single-stage, teacher-free training framework. A single CDLM objective unifies masked diffusion, continuous consistency models, and progressive/discrete distillation as analytic limits or empirical approximations of one common view. Empirically, CDLM establishes a new state of the art on both conditional and unconditional text generation, consistently outperforming strong base discrete diffusion models and often even multi-stage distilled baselines across sampling budgets, with the largest gains in the few-step regime. Together, these results position CDLM as a principled and scalable foundation for the next generation of fast, high-fidelity discrete generative modeling.