$R^2$-dLLM: Accelerating Diffusion Large Language Models via Spatio-Temporal Redundancy Reduction

arXiv cs.CL · April 22, 2026


Key Points

  • The paper identifies that diffusion LLM (dLLM) decoding latency is largely driven by recurring redundancy, including spatial redundancy (e.g., confidence clusters and positional ambiguity) and temporal redundancy (e.g., repeatedly remasking already-stable predictions).
  • It proposes $R^2$-dLLM, a unified framework that reduces decoding redundancy on both the inference side (training-free decoding rules that aggregate local confidence and finalize temporally stable tokens) and the training side (redundancy-aware supervised fine-tuning).
  • The approach is designed to lower dependence on manually tuned thresholds by aligning the model with more efficient decoding trajectories during fine-tuning.
  • Experiments on multiple models and tasks show up to a 75% reduction in the number of decoding steps while keeping generation quality competitive with existing strategies.
  • Overall, the work argues that decoding redundancy is a central practical bottleneck for dLLMs and that explicitly targeting it can deliver substantial real-world efficiency gains.

Abstract

Diffusion Large Language Models (dLLMs) have emerged as a promising alternative to autoregressive generation by enabling parallel token prediction. However, practical dLLM decoding still suffers from high inference latency, which limits deployment. In this work, we observe that a substantial part of this inefficiency comes from recurring redundancy in the decoding process, including spatial redundancy caused by confidence clusters and positional ambiguity, and temporal redundancy caused by repeatedly remasking predictions that have already stabilized. Motivated by these patterns, we propose $R^2$-dLLM, a unified framework for reducing decoding redundancy from both inference and training perspectives. At inference time, we introduce training-free decoding rules that aggregate local confidence and token predictions, and finalize temporally stable tokens to avoid redundant decoding steps. We further propose a redundancy-aware supervised fine-tuning pipeline that aligns the model with efficient decoding trajectories and reduces reliance on manually tuned thresholds. Experiments demonstrate that $R^2$-dLLM consistently reduces the number of decoding steps by up to 75% compared to existing decoding strategies, while maintaining competitive generation quality across different models and tasks. These results validate that decoding redundancy is a central bottleneck in dLLMs, and that explicitly reducing it yields substantial practical efficiency gains.
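To make the two decoding rules concrete, here is a minimal sketch of what a "spatial plus temporal" finalization check might look like. This is an illustrative reconstruction, not the paper's actual algorithm: the function name, window size, and thresholds (`window`, `conf_thresh`, `stability_k`) are all assumptions. The spatial rule smooths per-position confidence over a local window so clustered high-confidence positions are finalized together; the temporal rule only finalizes positions whose predicted token has stayed unchanged for several consecutive steps, so they are never remasked again.

```python
import numpy as np

def redundancy_aware_finalize(confidence, stable_steps, finalized,
                              window=3, conf_thresh=0.9, stability_k=2):
    """One hypothetical decoding step for a masked-diffusion decoder.

    confidence   : float array, per-position confidence of the current
                   argmax prediction at this step.
    stable_steps : int array, number of consecutive steps each position's
                   argmax token has remained unchanged (tracked by caller).
    finalized    : bool array, positions already committed and excluded
                   from further remasking.
    Returns the updated `finalized` mask.
    """
    # Spatial rule: average confidence over a local window, so isolated
    # confidence spikes are discounted and confidence clusters count more.
    kernel = np.ones(window) / window
    local_conf = np.convolve(confidence, kernel, mode="same")

    # Temporal rule: only commit positions that are both locally confident
    # and temporally stable; committed tokens skip all future remask steps.
    newly_final = (~finalized) \
        & (local_conf >= conf_thresh) \
        & (stable_steps >= stability_k)
    return finalized | newly_final

# Toy example: five positions, the first three confident but only the
# middle one sits inside a high-confidence cluster after smoothing.
conf = np.array([0.95, 0.96, 0.97, 0.20, 0.30])
stable = np.array([2, 2, 2, 0, 0])
done = np.zeros(5, dtype=bool)
done = redundancy_aware_finalize(conf, stable, done)
```

Note that zero-padding at the sequence boundaries (the `mode="same"` convolution) makes edge positions harder to finalize; a real implementation would likely handle boundaries differently, and the paper's fine-tuning stage is precisely meant to reduce dependence on hand-tuned thresholds like `conf_thresh`.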
