Mitigating Object Hallucinations in LVLMs via Attention Imbalance Rectification

arXiv cs.CV / 3/26/2026


Key Points

  • The paper investigates why large vision-language models (LVLMs) produce object hallucinations and finds that imbalanced attention allocation is strongly causally correlated with hallucination occurrence.
  • It introduces “attention imbalance” as a measurable quantity (including cross-modality and token-level disparity) that also supports visual interpretation of attention patterns linked to hallucinations.
  • To reduce object hallucinations, the authors propose Attention Imbalance Rectification (AIR), a lightweight intervention applied at decoding time that redistributes attention weights to correct both modality-wise and token-wise imbalances.
  • Experiments across four mainstream LVLMs on three benchmarks (CHAIR, POPE, MM-Vet), compared against seven baselines, show consistent hallucination reduction—up to 35.1%—and some improvement in general vision-language capability—up to 15.9%.

Abstract

Object hallucination in Large Vision-Language Models (LVLMs) severely compromises their reliability in real-world applications, posing a critical barrier to their deployment in high-stakes scenarios such as autonomous driving and medical image analysis. Through systematic empirical investigation, we identify that imbalanced attention allocation, both across modalities (i.e., vision and language) and within modalities (among individual tokens), exhibits a strong causal correlation with the occurrence of object hallucination. Leveraging this insight, we introduce a novel concept termed attention imbalance, which not only quantifies the degree of attention disparity but also visually delineates the underlying patterns (e.g., over-attentiveness to irrelevant language tokens or under-attentiveness to discriminative visual features) that drive object hallucination. To mitigate object hallucination, we further propose Attention Imbalance Rectification (AIR), a lightweight decoding-time intervention method that reallocates attention weights and adjusts attention distributions to rectify modality-wise and token-wise imbalances. Extensive evaluations on four mainstream LVLMs and three benchmarks (CHAIR, POPE, and MM-Vet) with seven baselines demonstrate that AIR consistently reduces object hallucination rates, achieving up to a 35.1% reduction compared to the baselines, while improving LVLMs' general capability by up to 15.9% across diverse vision-language tasks.
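The paper does not include the AIR algorithm itself here, but the core idea of quantifying a modality-wise attention disparity and then reallocating attention mass at decoding time can be illustrated with a toy sketch. Everything below (the function names `attention_imbalance` and `rectify`, the target share `alpha`, and the use of a simple absolute-difference score) is an illustrative assumption, not the authors' actual method.

```python
import numpy as np

def attention_imbalance(attn, vision_mask):
    """Toy modality-wise imbalance: |vision attention mass - text attention mass|.

    attn: 1-D array of one query token's attention weights (sums to 1).
    vision_mask: boolean array, True where the key token is a vision token.
    """
    vision_mass = attn[vision_mask].sum()
    text_mass = attn[~vision_mask].sum()
    return abs(vision_mass - text_mass)

def rectify(attn, vision_mask, alpha=0.5):
    """Rescale weights so vision tokens receive a target share `alpha` of the
    total attention, keeping relative weights within each modality.
    A stand-in for a decoding-time reallocation step, not the paper's AIR."""
    out = attn.copy()
    vision_mass = out[vision_mask].sum()
    text_mass = out[~vision_mask].sum()
    if vision_mass > 0 and text_mass > 0:
        out[vision_mask] *= alpha / vision_mass
        out[~vision_mask] *= (1 - alpha) / text_mass
    return out

# Example: a query token that badly under-attends to vision tokens.
attn = np.array([0.05, 0.05, 0.45, 0.45])          # two vision, two text keys
vision_mask = np.array([True, True, False, False])
print(attention_imbalance(attn, vision_mask))       # large imbalance: 0.8
balanced = rectify(attn, vision_mask, alpha=0.5)
print(attention_imbalance(balanced, vision_mask))   # rectified: 0.0
```

A real implementation would operate inside the model's attention layers per head and per generation step; this sketch only conveys the measure-then-rebalance structure described in the abstract.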