AI Navigate

ES-dLLM: Efficient Inference for Diffusion Large Language Models by Early-Skipping

arXiv cs.AI / 3/12/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The authors analyze diffusion large language models (dLLMs) and find that intermediate representations (keys, values, and hidden states) change only subtly across successive iterations, suggesting potential for reduced computation.
  • They propose ES-dLLM, a training-free inference acceleration framework that skips tokens in early layers based on estimated token importance derived from intermediate tensor variation and confidence scores from prior iterations (see the sketch after this list).
  • Experiments on LLaDA-8B and Dream-7B show ES-dLLM achieving 226.57 and 308.51 tokens per second, respectively, on an NVIDIA H200 GPU, with 5.6× to 16.8× speedups over vanilla inference and up to 1.85× over a state-of-the-art caching method while preserving generation quality.
  • Because the method requires no training or fine-tuning, it can be applied directly to existing dLLMs, substantially reducing computation and enabling more efficient deployment of diffusion-based LLMs.
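
The paper summary does not spell out the exact scoring rule, so the following is a minimal sketch of how early-layer token skipping could look, assuming token importance is a weighted mix of (a) how much each token's intermediate representation changed between the two most recent iterations and (b) how uncertain the model was about that token in the previous iteration. All names (importance_scores, select_active_tokens, alpha, keep_ratio) are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of token-importance scoring and early-layer skipping
# for a dLLM denoising step. Not the paper's implementation.
import torch


def importance_scores(
    hidden_prev: torch.Tensor,    # [seq, dim] hidden states from iteration t-1
    hidden_prev2: torch.Tensor,   # [seq, dim] hidden states from iteration t-2
    conf_prev: torch.Tensor,      # [seq] per-token confidence from iteration t-1
    alpha: float = 0.5,           # assumed weighting between the two signals
) -> torch.Tensor:
    """Score each token by how much its representation moved and how uncertain it still is."""
    variation = (hidden_prev - hidden_prev2).norm(dim=-1)   # large change -> still evolving
    variation = variation / (variation.max() + 1e-8)        # normalize to [0, 1]
    uncertainty = 1.0 - conf_prev                            # low confidence -> still important
    return alpha * variation + (1.0 - alpha) * uncertainty


def select_active_tokens(scores: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep only the top-scoring fraction of tokens for the early transformer layers."""
    k = max(1, int(keep_ratio * scores.numel()))
    active = torch.zeros_like(scores, dtype=torch.bool)
    active[scores.topk(k).indices] = True
    return active


# Toy usage: 64 tokens with 4096-dim hidden states from two past iterations.
h1, h2 = torch.randn(64, 4096), torch.randn(64, 4096)
conf = torch.rand(64)
mask = select_active_tokens(importance_scores(h1, h2, conf))
```

Under this reading, early layers would run attention and MLPs only over the active tokens, reusing cached keys, values, and hidden states for the skipped positions, while later layers still see the full sequence; how ES-dLLM actually restores or reuses the skipped tokens is not detailed in this summary.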

Abstract

Diffusion large language models (dLLMs) are emerging as a promising alternative to autoregressive models (ARMs) due to their ability to capture bidirectional context and their potential for parallel generation. Despite these advantages, dLLM inference remains computationally expensive because the full input context is processed at every iteration. In this work, we analyze the generation dynamics of dLLMs and find that intermediate representations, including key, value, and hidden states, change only subtly across successive iterations. Leveraging this insight, we propose ES-dLLM, a training-free inference acceleration framework for dLLMs that reduces computation by skipping tokens in early layers based on estimated token importance. Token importance is computed from the variation of intermediate tensors and the confidence scores of previous iterations. Experiments on LLaDA-8B and Dream-7B demonstrate that ES-dLLM achieves throughput of up to 226.57 and 308.51 tokens per second (TPS), respectively, on an NVIDIA H200 GPU, delivering 5.6× to 16.8× speedup over the vanilla implementation and up to 1.85× over the state-of-the-art caching method, while preserving generation quality.
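
The observation driving the design, that keys, values, and hidden states drift very little between successive iterations, can be probed with a simple measurement. Below is a minimal sketch assuming you have captured per-iteration snapshots of those tensors during decoding; the names (snapshots, report_iteration_drift) are illustrative, not part of the paper.

```python
# Hypothetical probe for the paper's observation: how much do keys, values,
# and hidden states change between consecutive denoising iterations?
# Tensor names and shapes are assumptions for illustration.
import torch


def mean_cosine_similarity(prev: torch.Tensor, curr: torch.Tensor) -> float:
    """Average per-token cosine similarity between two [seq, dim] snapshots."""
    sims = torch.nn.functional.cosine_similarity(prev, curr, dim=-1)
    return sims.mean().item()


def report_iteration_drift(snapshots: list[dict[str, torch.Tensor]]) -> None:
    """snapshots[t] holds {'key': ..., 'value': ..., 'hidden': ...} captured at iteration t."""
    for t in range(1, len(snapshots)):
        for name in ("key", "value", "hidden"):
            sim = mean_cosine_similarity(snapshots[t - 1][name], snapshots[t][name])
            print(f"iter {t:3d} | {name:6s} | cos-sim vs. previous iter: {sim:.4f}")
```

If the paper's finding holds, similarities close to 1.0 for most tokens would indicate that much of the per-iteration computation is redundant, which is exactly what the early-skipping scheme exploits.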