Mitigating Multimodal LLMs Hallucinations via Relevance Propagation at Inference Time

arXiv cs.LG / 5/5/2026

📰 News · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper targets hallucinations in multimodal LLMs, attributing them to an inference-time imbalance in which over-reliance on textual tokens weakens grounding in perceptual inputs (vision/audio).
  • It introduces LIME (Learning Inference-time Modality Enhancement), a training-free method that uses Layer-wise Relevance Propagation (LRP) to measure token-level contributions and steer the model toward greater reliance on perceptual inputs.
  • LIME enforces this relevance-based objective via inference-time updates to the model's key-value representations, without changing model parameters or requiring extra training data (see the sketch after this list).
  • Experiments on multiple vision and audio multimodal benchmarks show that LIME consistently reduces hallucinations and improves grounding while maintaining overall generation quality.
  • The analysis indicates that LIME increases modality contribution and yields more localized, semantically aligned relevance patterns.
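
To make the mechanism concrete, here is a minimal, hedged sketch of the inference-time idea. It is not the paper's implementation: a single random attention step replaces a real MLLM, and the share of attention mass on perceptual token positions stands in for the LRP relevance score that LIME actually optimizes. Only the cached keys are updated here (the paper updates key-value representations); model weights stay frozen throughout.

```python
# Toy sketch of inference-time cache updates that raise reliance on perceptual tokens.
# All sizes, tensors, and the perceptual_reliance proxy below are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_percept, n_text = 32, 8, 24                      # hidden size, perceptual tokens, text tokens
n_ctx = n_percept + n_text

query = torch.randn(1, d)                             # query of the token being decoded
keys = torch.randn(n_ctx, d).requires_grad_(True)     # cached keys (perceptual positions first)
values = torch.randn(n_ctx, d)                        # cached values

opt = torch.optim.SGD([keys], lr=0.1)                 # only cache entries are optimized, never model weights

def perceptual_reliance(q, k):
    """Share of attention mass on the perceptual (first n_percept) positions.

    A crude stand-in for the token-level relevance that LIME measures with LRP.
    """
    attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)      # (1, n_ctx)
    return attn[:, :n_percept].sum()

for _ in range(20):                                   # a few gradient steps at decode time
    opt.zero_grad()
    loss = -perceptual_reliance(query, keys)          # maximize perceptual reliance
    loss.backward()
    opt.step()

# Attended context that would feed next-token prediction after the updates.
context = F.softmax(query @ keys.T / d ** 0.5, dim=-1) @ values
print(f"perceptual attention share: {perceptual_reliance(query, keys).item():.3f}")
```

In the paper's setting the objective comes from relevance propagated through the model rather than from raw attention mass, and the updates are applied to the actual key-value cache during decoding; the loop above only illustrates the training-free, gradient-on-cache pattern.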

Abstract

Multimodal large language models (MLLMs) have revolutionized the landscape of AI, demonstrating impressive capabilities in tackling complex vision- and audio-language tasks. However, a critical challenge remains: these models often suffer from hallucinations, generating outputs that diverge from the provided perceptual inputs. This tendency stems from an inherent imbalance in modality utilization during inference, where the dominance of textual tokens undermines the potential of perceptual inputs. As a result, the model frequently resorts to textual language priors at the expense of grounded evidence. To tackle this issue, we propose Learning Inference-time Modality Enhancement (LIME), a training-free framework designed to bolster multimodal grounding by explicitly enhancing modality usage during decoding. LIME leverages Layer-wise Relevance Propagation (LRP) to quantify token-level contributions and defines a relevance-based objective that promotes increased reliance on perceptual inputs. This objective is enforced through inference-time updates to the model's key-value representations, without modifying model parameters or requiring additional training data. We evaluate LIME across multiple multimodal benchmarks in both vision and audio domains, demonstrating consistent reductions in hallucinations and enhanced grounding while preserving generation quality. Further analysis shows that LIME increases modality contribution and produces more localized and semantically aligned relevance patterns.
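
Since the abstract leans on Layer-wise Relevance Propagation, a brief illustration of one LRP step may help. The sketch below applies the standard epsilon-LRP rule to a single linear layer and aggregates the resulting token-level relevance into a per-modality share; the layer, the toy output relevance, and the two-token "perceptual" split are assumptions made for illustration, not the paper's propagation rules or model.

```python
# Epsilon-LRP through one linear layer, then aggregation of relevance by modality.
# Sizes, the toy output relevance, and the modality split are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_in, d_out = 6, 4, 3
eps = 1e-6

a = rng.normal(size=(n_tokens, d_in))      # token activations entering the layer
W = rng.normal(size=(d_in, d_out))         # frozen layer weights
z = a @ W                                  # pre-activations, one row per token
R_out = np.abs(z)                          # toy relevance arriving from the layer above

# Epsilon rule: redistribute each output's relevance to its inputs in proportion
# to their signed contributions a_i * W_ij, with a small stabilizer in the denominator.
s = R_out / (z + eps * np.sign(z))         # (n_tokens, d_out)
R_in = a * (s @ W.T)                       # (n_tokens, d_in): relevance per input feature

token_relevance = R_in.sum(axis=1)         # one relevance score per token
perceptual_share = token_relevance[:2].sum() / token_relevance.sum()  # first 2 tokens = image/audio (assumed)
print(f"perceptual relevance share: {perceptual_share:.3f}")
```

Repeating such a step layer by layer, from the output back to the input tokens, is what yields the token-level contributions that a relevance-based objective like LIME's can then push toward the perceptual side.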