Hallucination Early Detection in Diffusion Models

arXiv cs.CV / April 23, 2026


Key Points

  • Diffusion-based text-to-image models can omit objects when generating multiple entities, leading to hallucinations that are often not addressed effectively by methods that only tune latent representations.
  • The paper proposes HEaD+ (Hallucination Early Detection +), which uses cross-attention maps and textual cues plus a “Predicted Final Image” input to detect incorrect generations early and decide whether to continue or restart with a different seed.
  • HEaD+ is trained on the new InsideGen dataset of 45,000 generated images, built from prompts containing up to seven objects, enabling targeted early detection for multi-object scenes.
  • Experiments show HEaD+ improves the chance of getting complete images by 6–8% for four-object prompts and can cut generation time by up to 32% when completeness is the goal, compared with leading approaches.
  • An additional integrated localization module predicts object centroids and checks pairwise spatial relations at an intermediate diffusion timestep, using gating to improve consistency with requested relations.
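The key points above describe a decide-or-restart loop: at an intermediate diffusion timestep, a detector inspects the partial generation and either lets denoising continue or aborts and retries with a fresh seed. The sketch below illustrates that control flow only; the step counts, function names, and the stand-in detector are our own assumptions, not the paper's implementation (the real HEaD+ classifier consumes cross-attention maps, the prompt, and the Predicted Final Image).

```python
import random

CHECK_STEP = 10    # intermediate timestep at which the check runs (assumed value)
TOTAL_STEPS = 50   # full denoising schedule length (assumed value)
MAX_RESTARTS = 5   # seed budget before giving up (assumed value)

def denoise_step(state, step):
    """Stand-in for one diffusion denoising step."""
    state["progress"] = step + 1
    return state

def early_detector(state, rng):
    """Stand-in for the HEaD+ classifier. In the real system this consumes
    cross-attention maps, textual information, and the Predicted Final Image;
    here we simulate a seed-dependent pass/fail decision."""
    return rng.random() > 0.4  # True -> generation looks complete so far

def generate_with_early_detection(prompt, seed=0):
    """Run denoising, aborting early and reseeding when the detector flags
    a likely-incomplete generation. Returns (final state, restarts used)."""
    state = {"prompt": prompt, "progress": 0}
    for restart in range(MAX_RESTARTS):
        rng = random.Random(seed + restart)  # a new seed per attempt
        state = {"prompt": prompt, "progress": 0}
        aborted = False
        for step in range(TOTAL_STEPS):
            state = denoise_step(state, step)
            if step == CHECK_STEP and not early_detector(state, rng):
                aborted = True  # restart, saving the remaining denoising steps
                break
        if not aborted:
            return state, restart
    return state, MAX_RESTARTS - 1  # budget exhausted: return last attempt

result, restarts_used = generate_with_early_detection("a cat, a dog, a ball")
```

The time savings reported in the paper come from this structure: a rejected seed costs only `CHECK_STEP` steps instead of `TOTAL_STEPS`, so exploring several seeds is much cheaper than rerunning full generations.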

Abstract

Text-to-image generation has seen significant advances in output realism with the advent of diffusion models. However, diffusion models struggle when tasked with generating multiple objects, frequently producing hallucinations in which some entities are omitted. While existing solutions typically focus on optimizing latent representations within the diffusion model, the influence of the initial generation seed is often underestimated. Sampling with various seeds across multiple iterations can improve results, but this approach significantly increases time and energy costs. To address this challenge, we introduce HEaD+ (Hallucination Early Detection +), a novel approach designed to identify incorrect generations early in the diffusion process. The HEaD+ framework combines cross-attention maps and textual information with a novel input, the Predicted Final Image, to assess whether to proceed with the current generation or restart it with a different seed, thereby exploring multiple generation seeds while conserving time. HEaD+ is trained on the newly created InsideGen dataset of 45,000 generated images, produced from prompts containing up to seven objects. Our findings demonstrate a 6–8% increase in the likelihood of achieving a complete generation (i.e., an image accurately depicting all specified subjects) for four-object prompts when HEaD+ is applied alongside existing models. Additionally, HEaD+ reduces generation time by up to 32% when the goal is a complete image, improving the efficiency of generating complete and accurate object representations relative to leading models. Moreover, we propose an integrated localization module that predicts object centroid positions and verifies pairwise spatial relations (when requested by the user) at an intermediate timestep, gating generation jointly with object presence to further improve relation-consistent outcomes.
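The localization module described at the end of the abstract reduces, at check time, to a simple geometric test: given predicted object centroids, verify each requested pairwise relation and gate continuation on presence plus relation consistency. The sketch below shows that test under our own simplified conventions (normalized image coordinates with y increasing downward); the function names and relation vocabulary are illustrative assumptions, not the paper's API.

```python
# Hypothetical relation check over predicted centroids (x, y), with x, y in [0, 1]
# and y increasing downward, as is conventional for image coordinates.

def relation_holds(centroids, obj_a, relation, obj_b):
    """Check one pairwise spatial relation between two objects' centroids."""
    xa, ya = centroids[obj_a]
    xb, yb = centroids[obj_b]
    if relation == "left of":
        return xa < xb
    if relation == "right of":
        return xa > xb
    if relation == "above":
        return ya < yb  # smaller y = higher in the image
    if relation == "below":
        return ya > yb
    raise ValueError(f"unknown relation: {relation}")

def gate(centroids, constraints, required_objects):
    """Continue generation only if every required object was localized AND
    every requested pairwise relation holds (presence + relations, jointly)."""
    present = all(obj in centroids for obj in required_objects)
    relations_ok = all(relation_holds(centroids, a, rel, b)
                       for a, rel, b in constraints)
    return present and relations_ok

# Example: cat at (0.2, 0.6), dog at (0.7, 0.3).
cents = {"cat": (0.2, 0.6), "dog": (0.7, 0.3)}
ok = gate(cents,
          [("cat", "left of", "dog"), ("dog", "above", "cat")],
          ["cat", "dog"])
```

A failed gate at the intermediate timestep plays the same role as a failed presence check in HEaD+: the generation is abandoned early and restarted with a new seed rather than denoised to completion.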