Large Language Models are Universal Reasoners for Visual Generation

arXiv cs.CV / 5/6/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper notes that recent text-to-image systems, even when unified with an LLM backbone, often struggle to faithfully follow complex prompts during generation despite being good at verifying prompt-image consistency.
  • It formalizes this mismatch as an “understanding-generation gap”: the model’s strong prompt understanding is not translated into actionable guidance during visual generation.
  • The authors propose UniReasoner, which uses the LLM as a universal reasoner: it first produces a coarse visual draft as discrete vision tokens, then performs a self-critique of that draft for prompt consistency, yielding grounded textual feedback.
  • A diffusion model is then conditioned on the prompt, the visual draft, and the critique/evaluation so that generation is steered by explicit corrective signals, improving compositional alignment and semantic faithfulness without sacrificing image quality.
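The three-stage pipeline in the key points can be sketched as plain Python. This is a minimal, hypothetical illustration: the function names, token representation, and conditioning dictionary are assumptions, and the LLM draft and self-critique stages are stubbed rather than calling real models.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    # Coarse scene sketch as discrete vision tokens (stubbed here;
    # in the paper these come from the LLM backbone).
    vision_tokens: list

def draft_stage(prompt: str) -> Draft:
    # Stage 1 (stub): LLM sketches the scene as discrete vision tokens.
    return Draft(vision_tokens=[hash(w) % 4096 for w in prompt.split()])

def critique_stage(prompt: str, draft: Draft) -> str:
    # Stage 2 (stub): LLM self-critiques the draft against the prompt,
    # producing grounded, actionable feedback (omissions, wrong relations, ...).
    return f"verify all {len(prompt.split())} prompt elements appear in the draft"

def build_conditioning(prompt: str) -> dict:
    # Stage 3: the diffusion model is conditioned jointly on the prompt,
    # the visual draft, and the critique; here we only assemble that
    # joint conditioning signal.
    draft = draft_stage(prompt)
    critique = critique_stage(prompt, draft)
    return {"prompt": prompt,
            "draft": draft.vision_tokens,
            "critique": critique}

cond = build_conditioning("a red cube left of a blue sphere")
print(sorted(cond.keys()))
```

The point of the structure is that the two auxiliary signals are complementary, as the paper argues: the draft anchors the scene layout, while the critique converts verification into corrective constraints for the diffusion model.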

Abstract

Text-to-image generation has advanced rapidly with diffusion models, progressing from CLIP and T5 conditioning to unified systems where a single LLM backbone handles both visual understanding and generation. Despite the architectural unification, these systems frequently fail to faithfully align complex prompts during synthesis, even though they remain highly accurate at verifying whether an image satisfies those same prompts. We formalize this as the “understanding-generation gap” and propose UniReasoner, a framework that leverages the LLM as a universal reasoner to convert its understanding strength into direct generation guidance. Given a prompt, the LLM first produces a coarse visual draft composed of discrete vision tokens. It then performs a self-critique by evaluating the draft for prompt consistency, producing a grounded textual evaluation that pinpoints what needs to be corrected. Finally, a diffusion model is conditioned jointly on the prompt, the visual draft, and the evaluation, ensuring that generation is guided by explicit corrective signals. Each signal addresses a limitation of the other: the draft provides a concrete, scene-level anchor that reduces under-specification in text-only conditioning, while the evaluation turns verification into grounded, actionable constraints that correct omissions, hallucinations, and relational errors. Experiments show that UniReasoner improves compositional alignment and semantic faithfulness under the same diffusion backbone while maintaining image quality, demonstrating a practical way to exploit LLM reasoning to close the understanding-generation gap.