Focus Matters: Phase-Aware Suppression for Hallucination in Vision-Language Models

arXiv cs.CV / 4/7/2026


Key Points

  • The paper studies why large vision-language models (LVLMs) hallucinate objects not present in input images, and notes that prior suppression methods incur substantial latency because they run iterative optimization for each input.
  • By analyzing attention dynamics in vision encoders, it identifies a consistent three-phase information-processing pattern—diffusion, focus, and rediffusion—and finds hallucinations are especially sensitive to tokens with low attention during the focus phase.
  • It proposes a training-free, lightweight inference-time intervention that suppresses low-attention tokens during the focus phase, using only statistics from a single forward pass.
  • The method uses a Determinantal Point Process (DPP) to retain diverse visual cues while filtering redundant tokens, aiming to reduce hallucinations without harming caption quality.
  • Experiments across multiple LVLM backbones and decoding strategies show consistent reductions in hallucination metrics with negligible added inference latency and performance comparable to adversarial uncertainty estimation approaches.
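The core intervention described above, suppressing tokens that receive little attention during the focus phase, can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, the `keep_ratio` parameter, and the assumption that focus-phase attention is pre-aggregated into a single per-token score are all ours.

```python
import numpy as np

def suppress_low_attention_tokens(features, attn, keep_ratio=0.8):
    """Mask visual tokens whose focus-phase attention falls below a quantile.

    features:   (N, D) visual token embeddings (hypothetical layout)
    attn:       (N,) attention mass each token receives, assumed to be
                averaged over heads in the hypothesized 'focus'-phase layers
    keep_ratio: fraction of tokens to keep (illustrative knob)
    """
    threshold = np.quantile(attn, 1.0 - keep_ratio)
    mask = attn >= threshold              # True = token is kept
    # Zero out suppressed tokens so downstream layers ignore them.
    return features * mask[:, None], mask
```

Because the threshold comes from statistics of a single forward pass, this kind of masking adds essentially no inference cost, which matches the paper's training-free, single-pass framing.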

Abstract

Large Vision-Language Models (LVLMs) have achieved impressive progress in multimodal reasoning, yet they remain prone to object hallucinations, generating descriptions of objects that are not present in the input image. Recent approaches attempt to mitigate hallucinations by suppressing unreliable visual signals in the vision encoder, but many rely on iterative optimization for each input, resulting in substantial inference latency. In this work, we investigate the internal attention dynamics of vision encoders in LVLMs and identify a consistent three-phase structure of visual information processing: diffusion, focus, and rediffusion. Our analysis reveals that hallucination behavior is particularly sensitive to tokens receiving low attention during the focus phase. Motivated by this observation, we propose a lightweight inference-time intervention that selectively suppresses such tokens during the focus phase. The method operates in a training-free manner using statistics from a single forward pass and employs a Determinantal Point Process (DPP) to preserve diverse visual cues while filtering redundant tokens. Extensive experiments across multiple LVLM backbones and decoding strategies demonstrate that the proposed approach consistently reduces hallucination metrics while maintaining competitive caption quality. Moreover, compared to adversarial uncertainty estimation methods, our approach achieves comparable hallucination mitigation with negligible additional inference latency.
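The DPP component mentioned in the abstract can be illustrated with a standard greedy MAP-style selection under a quality-weighted similarity kernel. This is a generic sketch of how a DPP can keep diverse, high-quality tokens while dropping redundant ones; the kernel construction (cosine similarity weighted by per-token quality scores) and all names here are our assumptions, not the paper's definitions.

```python
import numpy as np

def greedy_dpp_select(features, quality, k):
    """Greedily pick k diverse, high-quality tokens under a DPP kernel.

    features: (N, D) token embeddings (normalized internally)
    quality:  (N,) per-token relevance scores, e.g. focus-phase attention
    k:        number of tokens to retain
    Uses the common kernel L_ij = q_i * cos_sim(x_i, x_j) * q_j and
    greedily maximizes the log-determinant of the selected submatrix.
    """
    X = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    S = X @ X.T                                   # cosine similarity
    L = quality[:, None] * S * quality[None, :]   # quality-weighted kernel
    selected, candidates = [], list(range(len(quality)))
    for _ in range(min(k, len(candidates))):
        best, best_gain = None, -np.inf
        for j in candidates:
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best, best_gain = j, gain
        if best is None:      # all remaining additions are degenerate
            break
        selected.append(best)
        candidates.remove(best)
    return selected
```

Because the determinant of a DPP kernel submatrix shrinks when near-duplicate rows are included, near-identical tokens are naturally penalized, which is the diversity-preserving behavior the abstract attributes to the DPP step.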