Mitigating Entangled Steering in Large Vision-Language Models for Hallucination Reduction

arXiv cs.CV / 4/10/2026


Key Points

  • Large vision-language models (LVLMs) still produce hallucinations—text that conflicts with visual evidence—despite prior mitigation methods.
  • The paper argues that hallucination suppression often harms generation behavior because the steering signals are entangled: suppressing hallucinations also shifts token distributions and can shorten outputs.
  • It introduces MESA, a plug-and-play framework that performs controlled, selective latent interventions targeted at hallucination-relevant responses (see the sketch after this list).
  • Experiments across multiple LVLM families and diverse benchmarks show MESA reduces hallucinations while better preserving the models' original generation behavior and token distributions.
  • The approach is positioned as maintaining intrinsic generation behavior, improving on prior latent-steering and hallucination-mitigation techniques.
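
This summary does not specify how MESA implements its selective intervention. Purely to illustrate the general idea of gated latent steering, here is a minimal PyTorch sketch that applies a steering direction only to tokens a linear probe flags as hallucination-relevant; the steering vector, probe, threshold, scale, and layer choice are all assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch of selective latent intervention via a forward hook.
# steering_vector, probe_weight, threshold, and alpha are illustrative
# assumptions; MESA's actual gating and steering construction may differ.
import torch


def make_selective_steering_hook(steering_vector: torch.Tensor,
                                 probe_weight: torch.Tensor,
                                 threshold: float = 0.5,
                                 alpha: float = 1.0):
    """Return a hook that nudges hidden states away from a hallucination
    direction, but only for tokens a linear probe flags as relevant."""

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output  # (B, T, D)
        # Per-token relevance score from an assumed pre-trained linear probe.
        relevance = torch.sigmoid(hidden @ probe_weight)             # (B, T)
        gate = (relevance > threshold).unsqueeze(-1).to(hidden.dtype)
        # Subtract the steering direction only where the gate fires,
        # leaving every other token representation untouched.
        steered = hidden - alpha * gate * steering_vector
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered

    return hook


# Hypothetical usage: attach to one decoder layer of an LVLM's language
# model (the attribute path below is illustrative, not a real API guarantee):
# handle = model.language_model.layers[20].register_forward_hook(
#     make_selective_steering_hook(v_halluc, w_probe))
# ... run generation ...
# handle.remove()
```

The gate is the point of the sketch: an unconditional intervention would shift every token's representation, which is the entangled-steering failure mode the paper describes, whereas a per-token gate leaves hallucination-irrelevant positions untouched.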

Abstract

Large Vision-Language Models (LVLMs) have achieved remarkable success across cross-modal tasks but remain hindered by hallucinations, producing textual outputs inconsistent with visual content. Existing methods mitigate hallucinations but often alter generation behavior, resulting in shorter outputs and shifted token distributions, especially in latent space steering approaches. We identify that this issue stems from entangled steering signals, where suppressing hallucinations inadvertently disrupts the model's intrinsic generation behavior. To address this, we propose MESA, an effective plug-and-play framework that performs controlled and selective latent intervention for hallucination mitigation. Specifically, MESA targets hallucination-relevant responses while preserving the model's original token distribution, enabling effective hallucination reduction without compromising generation behavior. Extensive experiments across diverse generative and discriminative benchmarks demonstrate that MESA consistently reduces hallucinations while better preserving generation behavior, outperforming prior methods across multiple LVLM families.
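
One way to quantify the "shifted token distributions" the abstract refers to is the KL divergence between the base model's and the intervened model's next-token distributions. The sketch below is our own illustration of such a check, not an evaluation protocol taken from the paper.

```python
# Illustrative measure (not from the paper) of how much an intervention
# shifts the next-token distribution: KL(base || steered), averaged over
# all sequence positions.
import torch
import torch.nn.functional as F


@torch.no_grad()
def mean_token_kl(base_logits: torch.Tensor,
                  steered_logits: torch.Tensor) -> float:
    """Average KL divergence between original and steered next-token
    distributions. Both logit tensors have shape (B, T, V)."""
    base_logp = F.log_softmax(base_logits, dim=-1)
    steered_logp = F.log_softmax(steered_logits, dim=-1)
    # F.kl_div takes the approximating distribution's log-probs as input
    # and, with log_target=True, the reference log-probs as target.
    kl = F.kl_div(steered_logp, base_logp, log_target=True,
                  reduction="none").sum(-1)   # (B, T): KL per position
    return kl.mean().item()
```

A near-zero value would indicate the intervention preserved the model's original token distribution; a large value would signal the kind of behavioral drift the paper attributes to entangled steering.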