Spotlight and Shadow: Attention-Guided Dual-Anchor Introspective Decoding for MLLM Hallucination Mitigation

arXiv cs.CV / 4/14/2026


Key Points

  • The paper addresses hallucinations in multimodal large language models (MLLMs), specifically cases where generated text contradicts visual inputs.
  • It proposes Dual-Anchor Introspective Decoding (DaID), a contrastive decoding approach that calibrates each token using internal “perceptual discrepancies.”
  • DaID identifies two anchor layers per token: an attention-guided "Spotlight" layer that amplifies visual factual signals, and a "Shadow" layer that suppresses ungrounded textual continuation (the paper's "textual inertia"); a sketch of the contrastive combination follows this list.
  • Using visual attention distributions to drive token-specific dual-anchor adaptation, DaID aims to reduce hallucinations while improving reasoning quality.
  • Experiments on multiple benchmarks and across different MLLMs reportedly show significant hallucination mitigation and stronger general reasoning performance.
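
The summary frames DaID as contrastive decoding over the two anchors but does not spell out the combination rule. The sketch below uses the standard contrastive-decoding form as a stand-in; the function name `dual_anchor_contrast` and the contrast weight `alpha` are hypothetical, not the paper's specification.

```python
import torch

def dual_anchor_contrast(spotlight_logits: torch.Tensor,
                         shadow_logits: torch.Tensor,
                         alpha: float = 1.0) -> torch.Tensor:
    """Contrastively combine the two anchor distributions for one token.

    spotlight_logits: next-token logits read out at the Spotlight layer
                      (visually grounded anchor).
    shadow_logits:    next-token logits read out at the Shadow layer
                      (ungrounded, text-inertia anchor).
    alpha:            hypothetical contrast strength; the paper may use a
                      different, possibly token-adaptive, weighting.
    """
    # Boost tokens the visually grounded anchor prefers and penalize
    # tokens the ungrounded anchor prefers -- the usual contrastive form.
    return (1.0 + alpha) * spotlight_logits - alpha * shadow_logits
```

In contrastive-decoding practice the result is typically followed by an adaptive plausibility filter (masking tokens far below the top probability) before sampling, which would fit naturally after this step.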

Abstract

Multimodal Large Language Models (MLLMs) have demonstrated remarkable reasoning capabilities yet continue to suffer from hallucination, where generated text contradicts visual content. In this paper, we introduce Dual-Anchor Introspective Decoding (DaID), a novel contrastive decoding framework that dynamically calibrates each token generation by mining the model's internal perceptual discrepancies. Specifically, DaID identifies a Spotlight layer to amplify visual factual signals and a Shadow layer to suppress textual inertia. By leveraging visual attention distributions to guide this dual-anchor selection process, our method ensures precise, token-specific adaptation. Experimental results across multiple benchmarks and MLLMs demonstrate that DaID significantly mitigates hallucination while enhancing general reasoning capabilities.
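
The abstract says visual attention distributions guide the dual-anchor selection, but not the selection rule itself. A minimal sketch under one plausible reading, where the layer attending most to image tokens becomes the Spotlight and the least-attending layer becomes the Shadow; the tensor shapes and the mass-based scoring here are assumptions, not the paper's method:

```python
import torch

def select_anchor_layers(attn: torch.Tensor,
                         image_token_mask: torch.Tensor) -> tuple[int, int]:
    """Pick (spotlight, shadow) layer indices for the current token.

    attn:             (num_layers, num_heads, seq_len) attention weights
                      from the current query position to all context
                      positions, one slice per decoder layer.
    image_token_mask: (seq_len,) boolean mask marking visual tokens.
    """
    # Average over heads, then sum the attention mass landing on image
    # tokens: a per-layer "visual grounding" score (assumed heuristic).
    visual_mass = attn.mean(dim=1)[:, image_token_mask].sum(dim=-1)
    spotlight = int(visual_mass.argmax())  # most visually grounded layer
    shadow = int(visual_mass.argmin())     # least visually grounded layer
    return spotlight, shadow
```

Because the scores are computed per generated token, the anchor pair can change at every decoding step, which is consistent with the "precise, token-specific adaptation" the abstract describes.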