Controllable Image Generation with Composed Parallel Token Prediction

arXiv cs.LG / 4/8/2026


Key Points

  • The paper proposes a theoretically grounded framework for composing conditional discrete generative processes, with masked generation/absorbing diffusion as a special case.
  • It supports precise control over novel combinations and counts of input conditions beyond what appears in training data, including concept weighting for emphasis or negation of specific conditions.
  • Using the compositional discrete vocabulary of VQ-VAE and VQ-GAN, the method achieves a 63.4% relative reduction in error rate versus the prior state of the art, averaged across positional CLEVR, relational CLEVR, and FFHQ, while improving FID by an average of 9.58 points (absolute).
  • The approach also reports a 2.3× to 12× real-time speed-up over comparable methods, and demonstrates practicality by fine-tuning an open pre-trained discrete text-to-image model for fine-grained control of text-to-image generation.
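
The core idea of composing conditions can be illustrated with a small sketch. The snippet below is an illustration only, not the paper's exact formulation: it assumes that each condition contributes a weighted deviation from the unconditional token logits (so a negative weight negates a concept), and that one parallel masked-generation step fills all masked positions by sampling from the composed distribution. All function names, the weighting scheme, and the toy dimensions are hypothetical.

```python
import numpy as np

def compose_logits(cond_logits, uncond_logits, weights):
    """Combine per-condition token logits using concept weights.

    Illustrative sketch: each condition adds its deviation from the
    unconditional logits, scaled by a weight. A weight > 1 emphasizes
    the condition; a negative weight negates it.
    """
    composed = uncond_logits.copy()
    for logits, w in zip(cond_logits, weights):
        composed += w * (logits - uncond_logits)
    return composed

def masked_generation_step(composed_logits, tokens, mask_id, rng):
    """Fill every masked position in parallel by sampling from the
    composed distribution (one step of absorbing-diffusion decoding)."""
    # Softmax over the vocabulary dimension, numerically stabilized.
    probs = np.exp(composed_logits - composed_logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    out = tokens.copy()
    for i in np.flatnonzero(tokens == mask_id):
        out[i] = rng.choice(probs.shape[-1], p=probs[i])
    return out

# Toy usage with random logits standing in for model outputs.
rng = np.random.default_rng(0)
V, L = 8, 5                      # hypothetical vocab size and sequence length
mask_id = V                      # mask token sits outside the sampled vocab
tokens = np.full(L, mask_id)     # start fully masked
uncond = rng.normal(size=(L, V))
cond_a = rng.normal(size=(L, V))
cond_b = rng.normal(size=(L, V))

# Emphasize condition A, negate condition B.
logits = compose_logits([cond_a, cond_b], uncond, weights=[1.5, -0.5])
tokens = masked_generation_step(logits, tokens, mask_id, rng)
```

In practice the per-condition logits would come from the conditional model evaluated once per condition, so the composition costs one extra forward pass per condition but no retraining.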

Abstract

Conditional discrete generative models struggle to faithfully compose multiple input conditions. To address this, we derive a theoretically grounded formulation for composing discrete probabilistic generative processes, with masked generation (absorbing diffusion) as a special case. Our formulation enables precise specification of novel combinations and numbers of input conditions that lie outside the training data, with concept weighting enabling emphasis or negation of individual conditions. In synergy with the richly compositional learned vocabulary of VQ-VAE and VQ-GAN, our method attains a 63.4% relative reduction in error rate compared to the previous state of the art, averaged across 3 datasets (positional CLEVR, relational CLEVR, and FFHQ), simultaneously obtaining an average absolute FID improvement of 9.58. Meanwhile, our method offers a 2.3× to 12× real-time speed-up over comparable methods, and is readily applied to an open pre-trained discrete text-to-image model for fine-grained control of text-to-image generation.