Visual Attention Drifts, but Anchors Hold: Mitigating Hallucination in Multimodal Large Language Models via Cross-Layer Visual Anchors

arXiv cs.CV / 3/27/2026


Key Points

  • The paper analyzes why multimodal LLMs hallucinate objects by studying how visual attention evolves across layers, concluding that deep-layer attention drifts back toward early-layer noise.
  • It argues that output reliability improves when the model captures “visual anchors” at intermediate layers, rather than relying on final-layer attention.
  • The authors introduce CLVA (Cross-Layer Visual Anchors), a training-free method that reinforces mid-layer features and suppresses regressive noise to pull deep-layer attention toward correct visual regions.
  • Experiments across multiple architectures and benchmarks show strong hallucination mitigation performance without a meaningful increase in compute time or GPU memory usage.

Abstract

Multimodal Large Language Models often suffer from object hallucination. While existing research utilizes attention enhancement and visual retracing, we find these works lack sufficient interpretability regarding attention drift in the final model stages. In this paper, we investigate the layer-wise evolution of visual features and discover that hallucination stems from deep-layer attention regressing toward the initial visual noise of early layers. We observe that output reliability depends on acquiring visual anchors at intermediate layers rather than final layers. Based on these insights, we propose CLVA (Cross-Layer Visual Anchors), a training-free method that reinforces critical mid-layer features while suppressing regressive noise. This approach effectively pulls deep-layer attention back to correct visual regions by utilizing essential anchors captured from attention dynamics. We evaluate our method across diverse architectures and benchmarks, demonstrating outstanding performance without a significant increase in computation time or GPU memory.
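The abstract describes the mechanism only at a high level, so the following is a minimal, hypothetical sketch of the general idea: select "anchor" visual tokens from averaged mid-layer attention, then blend deep-layer attention toward those anchors. The function name, layer ranges, and blending rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def reanchor_deep_attention(attn, mid_range=(8, 16), deep_start=24,
                            top_k=4, alpha=0.5):
    """Illustrative sketch of cross-layer visual anchoring (not the
    authors' code).

    attn: array of shape (num_layers, num_visual_tokens) holding the
          attention mass each layer assigns to the visual tokens.

    Steps (assumed): average attention over intermediate layers to pick
    the top-k anchor tokens, then, for each deep layer, blend its
    attention with a uniform mask over the anchors and renormalize so
    each row remains a valid distribution.
    """
    attn = attn.copy()
    # 1. Identify anchor tokens from mid-layer attention dynamics.
    mid = attn[mid_range[0]:mid_range[1]].mean(axis=0)
    anchors = np.argsort(mid)[-top_k:]

    # 2. Build a mask that concentrates weight on the anchor tokens.
    mask = np.zeros_like(mid)
    mask[anchors] = 1.0 / top_k

    # 3. Pull deep-layer attention toward the anchors.
    for layer in range(deep_start, attn.shape[0]):
        blended = (1 - alpha) * attn[layer] + alpha * mask
        attn[layer] = blended / blended.sum()  # renormalize to sum to 1
    return attn, anchors
```

Because the blend is a convex combination with a mask that puts all its weight on the anchors, anchor tokens can only gain attention mass in the deep layers, which matches the paper's stated goal of pulling deep-layer attention back toward the anchored visual regions.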