Prefill-Time Intervention for Mitigating Hallucination in Large Vision-Language Models

arXiv cs.CV · April 29, 2026


Key Points

  • The paper addresses hallucinations in large vision-language models (LVLMs) and argues that existing steering methods worsen residual hallucinations because they act only during decoding, allowing errors to accumulate autoregressively.
  • It introduces Prefill-Time Intervention (PTI), which applies intervention only once during the prefill stage to enhance the initial KV cache before hallucination errors compound.
  • PTI is modality-aware, deriving distinct steering directions for visual and textual representations, and decouples the intervention: keys are steered toward visually grounded objects while values are steered to filter out background noise.
  • Experiments show that PTI substantially reduces hallucinations and generalizes across multiple decoding strategies, LVLMs, and benchmarks.
  • The method is orthogonal to existing decoding-stage techniques, making it a plug-and-play addition that can further improve results, with code released on GitHub.
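To make the mechanism concrete, the sketch below illustrates the general shape of a one-shot, prefill-stage KV-cache edit: keys and values of visual tokens are shifted along separate steering directions before any decoding begins. This is a minimal illustration only; the function name, the additive update rule, the `alpha`/`beta` strengths, and the random stand-in directions are all assumptions for demonstration, not the paper's actual method of deriving modality-aware directions.

```python
import numpy as np

def prefill_kv_steering(keys, values, is_visual, key_dir, value_dir,
                        alpha=1.0, beta=1.0):
    """Hypothetical sketch of a prefill-only KV-cache intervention.

    keys, values : (seq_len, d) arrays cached after the prefill forward pass.
    is_visual    : (seq_len,) boolean mask marking visual-token positions.
    key_dir      : (d,) steering direction nudging keys toward
                   visually grounded content (stand-in here).
    value_dir    : (d,) direction whose removal filters background
                   noise from values (stand-in here).
    """
    keys, values = keys.copy(), values.copy()
    # Decoupled intervention, applied once at prefill:
    # steer keys of visual tokens toward grounded objects...
    keys[is_visual] += alpha * key_dir
    # ...and steer values of visual tokens away from background noise.
    values[is_visual] -= beta * value_dir
    return keys, values

# Toy usage: 2 visual tokens followed by 2 text tokens, hidden size 3.
keys = np.zeros((4, 3))
values = np.zeros((4, 3))
is_visual = np.array([True, True, False, False])
new_k, new_v = prefill_kv_steering(keys, values, is_visual,
                                   key_dir=np.ones(3), value_dir=np.ones(3),
                                   alpha=0.5, beta=0.5)
# Visual positions are shifted; textual positions are left untouched.
```

Because the edit happens once, before autoregressive decoding starts, every later token attends to the corrected cache, which is the paper's rationale for intervening at prefill rather than at each decoding step.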

Abstract

Large Vision-Language Models (LVLMs) have achieved remarkable progress in visual-textual understanding, yet their reliability is critically undermined by hallucinations, i.e., the generation of factually incorrect or inconsistent responses. While recent studies using steering vectors demonstrated promise in reducing hallucinations, a notable challenge remains: they inadvertently amplify the severity of residual hallucinations. We attribute this to their exclusive focus on the decoding stage, where errors accumulate autoregressively and progressively worsen subsequent hallucinatory outputs. To address this, we propose Prefill-Time Intervention (PTI), a novel steering paradigm that intervenes only once during the prefill stage, enhancing the initial Key-Value (KV) cache before error accumulation occurs. Specifically, PTI is modality-aware, deriving distinct directions for visual and textual representations. This intervention is decoupled to steer keys toward visually-grounded objects and values to filter background noise, correcting hallucination-prone representations at their source. Extensive experiments demonstrate PTI's significant performance in mitigating hallucinations and its generalizability across diverse decoding strategies, LVLMs, and benchmarks. Moreover, PTI is orthogonal to existing decoding-stage methods, enabling plug-and-play integration and further boosting performance. Code is available at: https://github.com/huaiyi66/PTI.