Breaking Block Boundaries: Anchor-based History-stable Decoding for Diffusion Large Language Models

arXiv cs.CL / April 13, 2026


Key Points

  • The paper argues that semi-autoregressive decoding in diffusion LLMs has inherent block constraints that delay decoding of many cross-block stable tokens.
  • It presents three observations about stable token identification: naive lookahead is unreliable, token stability correlates with convergence trends, and historical information becomes isolated across blocks.
  • To address this, the authors propose Anchor-based History-stable Decoding (AHD), a training-free, plug-and-play dynamic decoding method that tracks token stability via dynamic anchors in real time.
  • Experiments across language, vision-language, and audio-language tasks show AHD improves both performance and inference efficiency and can reverse the typical degradation seen in other advanced acceleration strategies.
  • On the BBH benchmark specifically, AHD reduces decoding steps by 80% while improving performance by 3.67%.

Abstract

Diffusion Large Language Models (dLLMs) have recently become a promising alternative to autoregressive large language models (ARMs). Semi-autoregressive (Semi-AR) decoding is widely employed in base dLLMs and advanced decoding strategies due to its superior performance. However, our observations reveal that Semi-AR decoding suffers from inherent block constraints, which cause the decoding of many cross-block stable tokens to be unnecessarily delayed. To address this challenge, we systematically investigate the identification of stable tokens and present three key findings: (1) naive lookahead decoding is unreliable, (2) token stability closely correlates with convergence trend, and (3) historical information is isolated. Building on these insights, we propose Anchor-based History-stable Decoding (AHD), a training-free, plug-and-play dynamic decoding strategy. Specifically, AHD monitors the stability trend of tokens in real time through dynamic anchors. Once a token reaches stability, it initiates early cross-block decoding to enhance efficiency and performance. Extensive experiments across language, vision-language, and audio-language domains demonstrate that AHD simultaneously improves both performance and inference efficiency. Notably, AHD effectively reverses the performance degradation typically observed in existing advanced decoding acceleration strategies. For instance, on the BBH benchmark, our approach reduces decoding steps by 80% while improving performance by 3.67%.
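To make the mechanism concrete, here is a minimal sketch of anchor-based stability tracking as the abstract describes it: during iterative denoising, each position's current best-guess token is recorded as a "dynamic anchor", and a token whose guess holds unchanged for several consecutive steps is treated as stable and committed early, even beyond the current semi-AR block boundary. This is not the authors' code; the function names (`ahd_decode`, `predict_step`), the `MASK` placeholder, and the `patience` threshold are illustrative assumptions.

```python
MASK = -1  # hypothetical placeholder for still-masked positions

def ahd_decode(predict_step, num_positions, max_steps=20, patience=3):
    """Sketch of anchor-based stability tracking (assumed interface).

    predict_step(step) -> list of best-guess token ids, one per position,
    with MASK for positions the model has not yet resolved.
    """
    anchors = [None] * num_positions   # last observed guess per position
    streak = [0] * num_positions       # consecutive steps the guess held
    committed = {}                     # position -> token decoded early

    for step in range(max_steps):
        guesses = predict_step(step)
        for pos, tok in enumerate(guesses):
            if pos in committed:
                continue               # already decoded early; skip
            if tok == MASK:
                anchors[pos], streak[pos] = None, 0
                continue               # no real guess yet, no anchor
            if tok == anchors[pos]:
                streak[pos] += 1       # convergence trend continues
            else:
                anchors[pos] = tok     # move the anchor to the new guess
                streak[pos] = 1
            if streak[pos] >= patience:
                committed[pos] = tok   # stable: commit across block boundary
        if len(committed) == num_positions:
            break                      # every position decoded; stop early
    return committed


# Toy usage: position i converges to token i once step >= i.
def toy_predict(step):
    return [i if step >= i else MASK for i in range(4)]

print(ahd_decode(toy_predict, num_positions=4))  # → {0: 0, 1: 1, 2: 2, 3: 3}
```

Note that requiring a streak of agreeing predictions, rather than trusting a single lookahead guess, is what distinguishes this from the naive lookahead the paper finds unreliable: stability is judged from the convergence trend over multiple steps.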