ACT Now: Preempting LVLM Hallucinations via Adaptive Context Integration

arXiv cs.CV / 4/2/2026


Key Points

  • The paper highlights that large vision-language models (LVLMs) often produce severe hallucinations, and argues that prior fixes using static, single-step context handling are insufficient for dynamically changing generation states.
  • It introduces ACT (Adaptive Context inTegration), a training-free inference-time method that adaptively integrates contextual signals during decoding to preempt hallucinations.
  • ACT combines “visual context exploration,” using spatio-temporal profiling to amplify attention heads tied to visual exploration, with “semantic context aggregation,” which marginalizes semantic queries to better align vision evidence.
  • Experiments across multiple LVLMs report that ACT substantially reduces hallucinations while maintaining competitive performance on both discriminative and generative benchmarks.
  • The approach is positioned as robust and adaptable because it does not require additional training and does not compromise the core generation behavior of the underlying models.
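To make the first component above concrete, here is a minimal sketch of what "adaptively amplifying attention heads tied to visual exploration" could look like at inference time. The function name, the per-head scoring vector, and the scaling factor are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def amplify_visual_heads(head_outputs, visual_scores, top_k=2, gamma=1.5):
    """Sketch: scale up outputs of the heads whose (hypothetical)
    spatio-temporal profiling score marks them as visually exploratory.

    head_outputs  : (num_heads, dim) array of per-head outputs
    visual_scores : per-head "visual exploration" scores (assumed given)
    top_k, gamma  : illustrative hyperparameters, not from the paper
    """
    out = np.array(head_outputs, dtype=float, copy=True)
    top = np.argsort(visual_scores)[-top_k:]  # heads with highest scores
    out[top] *= gamma                         # amplify only those heads
    return out
```

In a real LVLM this scaling would be applied inside the transformer's attention layers during decoding, with the profiling scores recomputed as the generation state changes.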

Abstract

Large Vision-Language Models (LVLMs) frequently suffer from severe hallucination issues. Existing mitigation strategies predominantly rely on isolated, single-step states to enhance visual focus or suppress strong linguistic priors. However, these static approaches neglect dynamic context changes across the generation process and struggle to correct inherited information loss. To address this limitation, we propose Adaptive Context inTegration (ACT), a training-free inference intervention method that mitigates hallucination through the adaptive integration of contextual information. Specifically, we first propose visual context exploration, which leverages spatio-temporal profiling to adaptively amplify attention heads responsible for visual exploration. To further facilitate vision-language alignment, we propose semantic context aggregation that marginalizes potential semantic queries to effectively aggregate visual evidence, thereby resolving the information loss caused by the discrete nature of token prediction. Extensive experiments across diverse LVLMs demonstrate that ACT significantly reduces hallucinations and achieves competitive results on both discriminative and generative benchmarks, acting as a robust and highly adaptable solution without compromising fundamental generation capabilities.
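The "marginalizing potential semantic queries" idea in the abstract can be illustrated as mixing next-token distributions conditioned on several candidate queries instead of committing to a single one. The sketch below assumes we are handed one logit vector per candidate query and a set of query weights; all names and the weighting scheme are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def aggregate_semantic_context(logits_per_query, query_weights):
    """Sketch: p(token) = sum_q w_q * p(token | query q).

    logits_per_query : (num_queries, vocab_size) next-token logits,
                       one row per candidate semantic query (assumed given)
    query_weights    : relative weights for the queries (assumed given)
    """
    probs = softmax(np.asarray(logits_per_query, dtype=float))  # (Q, V)
    w = np.asarray(query_weights, dtype=float)
    w = w / w.sum()            # normalize weights to a distribution
    return w @ probs           # marginal next-token distribution (V,)
```

Averaging in probability space rather than picking one query is what lets evidence supporting several plausible interpretations survive the discrete token-prediction step.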