One Token, Two Fates: A Unified Framework via Vision Token Manipulation Against MLLMs Hallucination

arXiv cs.CV / 3/12/2026

Key Points

  • The paper critiques existing training-free methods for reducing MLLM hallucination, noting that enhancing visual signals alone often fails against strong language priors, while suppressing those priors alone can introduce image-irrelevant noise.
  • It proposes a unified framework focused on vision tokens, built around two latent-representation modules: Synergistic Visual Calibration (SVC) and Causal Representation Calibration (CRC).
  • SVC incorporates augmented visual tokens to strengthen visual representations, while CRC prunes vision tokens to create latent-space negative samples that correct the model's internal biases.
  • The approach restores the vision-language balance, improving POPE accuracy by roughly 2% absolute on LLaVA-1.5 across multiple benchmarks at only a 1.06x inference-latency overhead.
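The CRC idea in the key points above can be sketched in a few lines: running the model with vision tokens pruned away yields a "negative" pass whose output reflects the model's language-prior bias, and contrasting it against the full pass down-weights that bias. This is a minimal illustration, not the paper's implementation; the function name and the contrast weight `alpha` are assumptions.

```python
def crc_calibrate(full_logits, pruned_logits, alpha=1.0):
    """Contrast full-vision logits against pruned-vision (negative) logits.

    Tokens the model still favors *without* visual evidence (high pruned
    logit) are down-weighted; visually grounded tokens are boosted.
    `alpha` controls the correction strength (illustrative choice).
    """
    return [(1 + alpha) * f - alpha * p
            for f, p in zip(full_logits, pruned_logits)]


# Toy example: token 0 is supported by the image (its logit collapses when
# vision tokens are pruned), token 1 is a language-prior guess (its logit
# barely changes without vision). Calibration widens the gap between them.
full   = [2.0, 1.8]   # logits with all vision tokens present
pruned = [0.5, 1.7]   # logits with vision tokens pruned away
print(crc_calibrate(full, pruned))  # → [3.5, 1.9] (up to float rounding)
```

The same contrast could equally be applied in latent space rather than on output logits, which is where the paper's "latent-space negative samples" phrasing points.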

Abstract

Current training-free methods tackle MLLM hallucination with separate strategies: either enhancing visual signals or suppressing text inertia. However, these separate methods are insufficient due to critical trade-offs: simply enhancing vision often fails against strong language priors, while suppressing language can introduce extra image-irrelevant noise. Moreover, we find their naive combination is also ineffective, necessitating a unified framework. We propose such a framework by focusing on the core asset: the vision token. Our design leverages two key insights: (1) augmented images offer complementary visual semantics, and (2) removing vision tokens (information-gap) isolates hallucination tendencies more precisely than distorting images (modality-gap). Based on these, our framework uses vision tokens in two distinct ways, both operating on latent representations: our Synergistic Visual Calibration (SVC) module incorporates augmented tokens to strengthen visual representations, while our Causal Representation Calibration (CRC) module uses pruned tokens to create latent-space negative samples for correcting internal model biases. By harmonizing these two roles, our framework effectively restores the vision-language balance, significantly reducing object hallucinations and improving POPE accuracy by an average of 2% absolute on LLaVA-1.5 across multiple benchmarks with only a 1.06x inference latency overhead.
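The SVC side of the abstract, incorporating augmented tokens to strengthen visual representations, amounts to fusing each vision token's latent vector with its counterpart from an augmented view of the image. The sketch below shows that fusion under the simplest assumption of a fixed linear blend; the function name and `weight` parameter are illustrative, not from the paper.

```python
def svc_fuse(orig_tokens, aug_tokens, weight=0.3):
    """Blend each original vision-token vector with its counterpart from
    an augmented view, reinforcing the visual signal in latent space.

    `orig_tokens` / `aug_tokens`: lists of latent vectors, one per vision
    token. `weight` is an illustrative mixing coefficient; the paper's
    actual calibration may be learned or content-dependent.
    """
    return [
        [(1 - weight) * o + weight * a for o, a in zip(tok_o, tok_a)]
        for tok_o, tok_a in zip(orig_tokens, aug_tokens)
    ]


# Two vision tokens with 2-dim latent vectors, equal-weight blend:
orig = [[1.0, 2.0], [0.0, 4.0]]
aug  = [[3.0, 4.0], [2.0, 0.0]]
print(svc_fuse(orig, aug, weight=0.5))  # → [[2.0, 3.0], [1.0, 2.0]]
```

Because the augmented view contributes complementary semantics rather than replacing the original tokens, the blend strengthens visual evidence without discarding the base representation.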