Locally Coherent Parallel Decoding in Diffusion Language Models

arXiv cs.CL / 2026-03-24


Key Points

  • The paper discusses diffusion language models (DLMs) as an alternative to autoregressive models, focusing on how discrete DLMs can achieve sub-linear latency via parallel token prediction.
  • It identifies a key limitation of standard parallel sampling in DLMs: independent sampling from marginal distributions breaks joint dependencies, causing syntactic inconsistencies and malformed multi-token structures.
  • The authors propose CoDiLA (Coherent Diffusion with Local Autoregression), which preserves parallel block generation while enforcing local sequential validity by using a small auxiliary autoregressive model on diffusion latents.
  • CoDiLA aims to maintain the core DLM strengths, including bidirectional modeling across blocks, while delegating fine-grained coherence to the auxiliary AR component.
  • Experiments show that a compact auxiliary AR model (around 0.6B parameters) largely eliminates coherence artifacts and improves the accuracy-speed tradeoff on code generation benchmarks, which the authors claim establishes a new Pareto frontier.
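The core failure mode in the second bullet can be made concrete with a toy example. The sketch below (a minimal illustration, not the paper's implementation) uses a two-token joint distribution whose only valid outcomes are the bracket pairs "()" and "[]": sampling each position independently from its marginal frequently produces mismatched pairs such as "(]", while sampling the second token from the conditional given the first, as a local AR pass would, never does.

```python
import random

# Toy joint distribution over two adjacent tokens (a bracket pair).
# Valid sequences are "()" and "[]"; mixed pairs like "(]" are malformed.
JOINT = {("(", ")"): 0.5, ("[", "]"): 0.5}

def marginal(pos):
    """Per-position marginal distribution implied by the joint."""
    m = {}
    for seq, p in JOINT.items():
        m[seq[pos]] = m.get(seq[pos], 0.0) + p
    return m

def sample(dist, rng):
    toks, probs = zip(*dist.items())
    return rng.choices(toks, weights=probs, k=1)[0]

def parallel_marginal_decode(rng):
    # Standard DLM-style parallel step: each position is sampled
    # independently from its marginal, so dependencies are lost.
    return (sample(marginal(0), rng), sample(marginal(1), rng))

def local_ar_decode(rng):
    # Locally autoregressive step: the second token is sampled from
    # the conditional given the first, preserving the joint structure.
    t0 = sample(marginal(0), rng)
    cond = {seq[1]: p for seq, p in JOINT.items() if seq[0] == t0}
    total = sum(cond.values())
    cond = {t: p / total for t, p in cond.items()}
    return (t0, sample(cond, rng))

rng = random.Random(0)
n = 10_000
bad_parallel = sum(parallel_marginal_decode(rng) not in JOINT for _ in range(n))
bad_ar = sum(local_ar_decode(rng) not in JOINT for _ in range(n))
print(f"malformed pairs: parallel={bad_parallel/n:.2%}, local AR={bad_ar/n:.2%}")
```

Under this toy joint, independent marginal sampling produces a mismatched pair about half the time, whereas the conditional (locally AR) decoder never does; CoDiLA's design targets exactly this gap at the block level.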

Abstract

Diffusion language models (DLMs) have emerged as a promising alternative to autoregressive (AR) models, offering sub-linear generation latency and bidirectional capabilities that are particularly appealing for code generation and editing. Achieving sub-linear latency in discrete DLMs requires predicting multiple tokens in parallel. However, standard DLMs sample tokens independently from conditional marginal distributions, failing to capture the joint dependencies among concurrently generated tokens. As a result, parallel decoding often introduces syntactic inconsistencies and breaks multi-token structures. In this work, we introduce CoDiLA (Coherent Diffusion with Local Autoregression), a method that reconciles parallel sampling with local dependency modeling. Rather than forcing the DLM to resolve fine-grained syntax, CoDiLA delegates local decoding to a small, auxiliary AR model operating on the diffusion latents. This design allows for parallel block generation while ensuring sequential validity within each block and maintaining core DLM capabilities, including bidirectional modeling across blocks. We demonstrate that using a highly compact auxiliary AR model (e.g., 0.6B parameters) effectively eliminates coherence artifacts, establishing a new Pareto frontier for accuracy and speed in code generation benchmarks.
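The decoding scheme the abstract describes can be sketched structurally as a loop: one parallel DLM step proposes latents for a whole block, then a cheap AR pass resolves the block's tokens sequentially, conditioned on those latents and the tokens decoded so far. The names below (`dlm_latents`, `ar_step`, and the toy stand-ins) are hypothetical placeholders for illustration, not the authors' API; real latents would be continuous representations rather than characters.

```python
import random

def decode_with_local_ar(num_blocks, block_size, dlm_latents, ar_step, rng):
    """Schematic CoDiLA-style decoding loop (hypothetical names):
    the DLM proposes per-block latents in one parallel step, and a
    small AR model resolves tokens sequentially *within* the block,
    conditioned on the latents and the block's decoded prefix."""
    out = []
    for b in range(num_blocks):
        latents = dlm_latents(b, block_size)   # one parallel DLM step per block
        block = []
        for i in range(block_size):            # cheap local AR pass over the block
            block.append(ar_step(latents, block, rng))
        out.extend(block)
    return out

# Toy stand-ins: the "latents" are simply target characters, and the
# AR step emits the next character given the decoded prefix.
def toy_latents(b, n):
    return ["(", "x", ")"][:n]

def toy_ar_step(latents, prefix, rng):
    return latents[len(prefix)]

tokens = decode_with_local_ar(2, 3, toy_latents, toy_ar_step, random.Random(0))
print(tokens)  # → ['(', 'x', ')', '(', 'x', ')']
```

The point of the structure is the cost split: the expensive bidirectional model runs once per block (parallel over positions), while only the small AR component runs per token, which is how the paper's accuracy-speed tradeoff claim becomes plausible.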