CRoCoDiL: Continuous and Robust Conditioned Diffusion for Language

arXiv cs.AI / 2026/3/24


Key Points

  • The paper introduces CRoCoDiL, a masked diffusion language model that moves diffusion into a continuous, sentence-level semantic latent space to reduce token dependency issues and semantic incoherence common in discrete marginal approaches.
  • CRoCoDiL uses a unified fine-tuning method that jointly trains an encoder–demasker architecture, grounding the demasking step in continuous latent representations and effectively forming a new autoencoder where MDM-based decoding reconstructs text.
  • Building on the same framework, it proposes two unconditional text generation algorithms—ConThenDisc (continuous latent generation then MDM decoding) and ConWithinDisc (iterative refinement of latents during discrete sampling).
  • Experiments on an LLaDA setup report improved generation quality and unconditional sampling more than 10x faster than prior baselines, according to the authors.

Abstract

Masked Diffusion Models (MDMs) provide an efficient non-causal alternative to autoregressive generation but often struggle with token dependencies and semantic incoherence due to their reliance on discrete marginal distributions. We address these limitations by shifting the diffusion process into a continuous sentence-level semantic space. We propose CRoCoDiL (Continuous and Robust Conditioned Diffusion for Language), a unified fine-tuning approach that jointly trains an encoder-demasker architecture, grounding the MDM demasking in continuous latent representations. This leads to the formation of a novel autoencoder in which decoding is obtained by an MDM algorithm. Relying on the same framework, we introduce two unconditional text synthesis algorithms: Continuous-Then-Discrete (ConThenDisc), a hybrid-diffusion approach that first generates latent representations in continuous space and then decodes these to tokens via an MDM, and Continuous-Within-Discrete (ConWithinDisc), a multi-diffusion strategy that refines latent representations throughout the discrete sampling process. Experiments using LLaDA show that our methods achieve superior generation quality and more than 10x faster sampling speeds in an unconditional setting.
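To make the two sampling strategies concrete, here is a minimal toy sketch of their control flow. Everything in it is an assumption for illustration: `sample_latent`, `demask_step`, and `refine_latent` are placeholder stand-ins, not the paper's learned encoder, demasker, or latent diffusion model. The only point it demonstrates is the structural difference: ConThenDisc finishes continuous generation before any discrete decoding, while ConWithinDisc updates the latent between demasking steps.

```python
import random

MASK = "<mask>"
VOCAB = ["a", "b", "c", "d"]  # toy vocabulary

def sample_latent(dim=4, steps=10, rng=None):
    """Placeholder for continuous latent diffusion: iteratively shrink a
    Gaussian vector (a real model would run learned denoising steps)."""
    rng = rng or random.Random(0)
    z = [rng.gauss(0, 1) for _ in range(dim)]
    for _ in range(steps):
        z = [0.9 * v for v in z]  # stand-in denoising update
    return z

def demask_step(tokens, z, rng):
    """Fill one masked position. A real MDM conditions on z via a network;
    here the latent merely biases which vocabulary item is picked."""
    idxs = [i for i, t in enumerate(tokens) if t == MASK]
    if not idxs:
        return tokens
    i = rng.choice(idxs)
    tokens = list(tokens)
    tokens[i] = VOCAB[int(abs(z[i % len(z)]) * 10) % len(VOCAB)]
    return tokens

def con_then_disc(length=6, seed=0):
    """ConThenDisc: generate the latent fully first, then decode via MDM."""
    rng = random.Random(seed)
    z = sample_latent(rng=rng)          # stage 1: continuous diffusion
    tokens = [MASK] * length
    while MASK in tokens:               # stage 2: discrete demasking
        tokens = demask_step(tokens, z, rng)
    return tokens

def refine_latent(z, tokens):
    """Placeholder latent refinement from the partially demasked sequence."""
    filled = sum(t != MASK for t in tokens)
    return [v * (1 - 0.05 * filled / len(tokens)) for v in z]

def con_within_disc(length=6, seed=0):
    """ConWithinDisc: interleave latent refinement with discrete demasking."""
    rng = random.Random(seed)
    z = sample_latent(rng=rng)
    tokens = [MASK] * length
    while MASK in tokens:
        tokens = demask_step(tokens, z, rng)
        z = refine_latent(z, tokens)    # latent updated during sampling
    return tokens
```

Both loops terminate after exactly `length` demasking steps; the only difference is whether the latent stays fixed (ConThenDisc) or is refined alongside the discrete state (ConWithinDisc).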