
Locate-then-Sparsify: Attribution Guided Sparse Strategy for Visual Hallucination Mitigation

arXiv cs.CV / 3/18/2026


Key Points

  • The paper introduces Locate-Then-Sparsify for Feature Steering (LTS-FS), a plug-and-play framework that applies layer-wise, attribution-guided feature steering to mitigate visual hallucinations in LVLMs.
  • It develops an attribution method based on causal interventions to quantify each layer's relevance to hallucinations, using a synthetic dataset with token-level and sentence-level hallucination cases.
  • The approach converts layer attribution scores into per-layer steering intensities, enabling targeted adjustments only on hallucination-relevant layers to avoid degrading non-hallucination tasks.
  • Extensive experiments across multiple LVLMs and benchmarks show effective hallucination reduction while preserving strong overall performance.
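The score-to-intensity conversion in the third point can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the top-k sparsification rule, and the score normalization are all assumptions made for the example.

```python
def attribution_to_intensities(scores, top_k=4, base_alpha=1.0):
    """Convert per-layer attribution scores into sparse steering intensities.

    Only the top_k most hallucination-relevant layers receive a nonzero
    intensity, proportional to their normalized attribution score; every
    other layer is left untouched (intensity 0.0), so steering never
    disturbs layers unrelated to hallucinations.
    """
    intensities = [0.0] * len(scores)
    # Indices of the top_k highest-scoring (most hallucination-relevant) layers.
    top = sorted(range(len(scores)), key=scores.__getitem__)[-top_k:]
    total = sum(scores[l] for l in top)
    if total > 0:
        for l in top:
            intensities[l] = base_alpha * scores[l] / total
    return intensities
```

At inference time, each layer's steering vector would then be scaled by its intensity before being added to that layer's hidden state, leaving zero-intensity layers unmodified.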

Abstract

Despite the significant advancements in Large Vision-Language Models (LVLMs), their tendency to generate hallucinations undermines reliability and restricts broader practical deployment. Among hallucination mitigation methods, feature steering has emerged as a promising approach that reduces erroneous outputs in LVLMs without increasing inference costs. However, current methods apply uniform feature steering across all layers. This heuristic strategy ignores inter-layer differences, potentially disrupting layers unrelated to hallucinations and ultimately degrading performance on general tasks. In this paper, we propose a plug-and-play framework called Locate-Then-Sparsify for Feature Steering (LTS-FS), which controls the steering intensity according to the hallucination relevance of each layer. We first construct a synthetic dataset comprising token-level and sentence-level hallucination cases. Based on this dataset, we introduce an attribution method based on causal interventions to quantify the hallucination relevance of each layer. With the attribution scores across layers, we propose a layer-wise strategy that converts these scores into feature steering intensities for individual layers, enabling more precise adjustments specifically on hallucination-relevant layers. Extensive experiments across multiple LVLMs and benchmarks demonstrate that our LTS-FS framework effectively mitigates hallucinations while preserving strong performance.
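The causal-intervention attribution described in the abstract can be illustrated with a minimal activation-patching sketch. Everything here is an assumption for illustration, not the paper's implementation: the `forward` and `halluc_score` callables are hypothetical stand-ins (a full forward pass from a list of per-layer hidden states, and a scalar hallucination metric on the output), and patching with a clean run's activations is one common interventional choice.

```python
def attribute_layers(forward, hiddens_halluc, hiddens_clean, halluc_score):
    """Score each layer's hallucination relevance via causal intervention.

    For every layer l, replace the hallucinating run's hidden state at l
    with the one from a clean (non-hallucinating) run, rerun the forward
    pass, and record how much the hallucination score drops. Larger drops
    indicate layers that contribute more to the hallucination.
    """
    baseline = halluc_score(forward(hiddens_halluc))
    scores = []
    for l in range(len(hiddens_halluc)):
        patched = list(hiddens_halluc)
        patched[l] = hiddens_clean[l]  # causal intervention at layer l
        scores.append(baseline - halluc_score(forward(patched)))
    return scores
```

With a toy model where the output is simply the sum of the layer states and the hallucination score is the output itself, each layer's attribution equals its own contribution, which is the behavior the sketch is meant to convey.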