The Thinking Pixel: Recursive Sparse Reasoning in Multimodal Diffusion Latents

arXiv cs.CV · April 29, 2026

📰 News · Models & Research

Key Points

  • The paper argues that diffusion models, despite excelling at high-fidelity synthesis, struggle with complex structured reasoning such as text-following in multimodal text-to-image generation.
  • It proposes a recursive sparse mixture-of-experts (MoE) mechanism integrated into standard diffusion models, adding recursion inside joint-attention layers to iteratively refine image (visual) tokens across latent steps.
  • A gating network dynamically selects specialized neural modules at each step based on current visual tokens, the diffusion timestep, and conditioning information, while parameter sharing is made efficient through sparse expert selection.
  • Experiments on class-conditioned ImageNet and evaluations using GenEval and DPG benchmarks show improved image generation performance compared with prior approaches.
  • Overall, the work extends “latent reasoning” and recursive strategies from language models to multimodal diffusion, designing a recursive MoE framework that operates directly on continuous visual tokens rather than requiring discrete token representations.

Abstract

Diffusion models have achieved success in high-fidelity data synthesis, yet their capacity for more complex, structured reasoning, such as text-following tasks, remains constrained. While advances in language models have leveraged strategies such as latent reasoning and recursion to enhance text understanding, extending these to multimodal text-to-image generation is challenging due to the continuous, non-discrete nature of visual tokens. To tackle this problem, we draw inspiration from modular human cognition and propose a recursive, sparse mixture-of-experts framework integrated into conventional diffusion models. Our approach introduces a recursive component within joint-attention layers that iteratively refines visual tokens over multiple latent steps while efficiently sharing parameters via sparse selection of neural modules. At each step, a gating network dynamically selects specialized neural modules, conditioned on the current visual tokens, the diffusion timestep, and the conditioning information. Comprehensive evaluation on class-conditioned ImageNet image generation and additional studies on the GenEval and DPG benchmarks demonstrate the superiority of the proposed method in enhancing image generation performance.
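To make the mechanism concrete, the recursive sparse-MoE refinement described in the key points can be sketched roughly as below. This is a minimal NumPy illustration under assumptions, not the paper's implementation: the expert architecture (plain linear maps here), the gating inputs (token features concatenated with scalar timestep and condition embeddings), and all dimensions and names (`W_gate`, `W_experts`, `recursive_sparse_moe`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

D, E, K, R = 8, 4, 2, 3  # token dim, num experts, top-K selected, recursive steps

# Hypothetical parameters, random for illustration only.
W_gate = rng.normal(scale=0.1, size=(D + 2, E))    # gate over [token, timestep, cond]
W_experts = rng.normal(scale=0.1, size=(E, D, D))  # one linear "expert" per module

def softmax(x):
    z = np.exp(x - x.max(-1, keepdims=True))
    return z / z.sum(-1, keepdims=True)

def recursive_sparse_moe(tokens, t, cond):
    """Iteratively refine continuous visual tokens with sparsely selected experts."""
    n = len(tokens)
    for _ in range(R):  # same parameters reused at every recursive step
        # Gating is conditioned on current tokens, diffusion timestep, and condition.
        feats = np.concatenate(
            [tokens, np.full((n, 1), t), np.full((n, 1), cond)], axis=1)
        probs = softmax(feats @ W_gate)
        topk = np.argsort(probs, axis=1)[:, -K:]   # sparse top-K expert choice per token
        out = np.zeros_like(tokens)
        for i, idx in enumerate(topk):
            w = probs[i, idx] / probs[i, idx].sum()  # renormalise selected gate weights
            for e, we in zip(idx, w):
                out[i] += we * (tokens[i] @ W_experts[e])
        tokens = tokens + out                      # residual refinement of visual tokens
    return tokens

x = rng.normal(size=(5, D))                # 5 continuous visual tokens
y = recursive_sparse_moe(x, t=0.5, cond=1.0)
print(y.shape)  # (5, 8)
```

Note the two properties the paper emphasizes: parameters are shared across the recursive steps (the same gate and experts are reused), and each token activates only `K` of `E` experts, keeping the per-step cost sparse.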