Reflect to Inform: Boosting Multimodal Reasoning via Information-Gain-Driven Verification

arXiv cs.AI / 3/30/2026


Key Points

  • The paper identifies a failure mode in multimodal LLMs where long-form generation increasingly drifts from image evidence to text priors, causing ungrounded reasoning and hallucinations.
  • Attention-based analysis suggests the models already contain a latent ability for late-stage visual verification, though it is not reliably activated during generation.
  • It introduces Visual Re-Examination (VRE), a self-evolving training framework that uses the model’s own iterative reflection traces to perform visual introspection during reasoning without extra image inputs.
  • Experiments across multiple multimodal benchmarks show VRE improves reasoning accuracy and perceptual reliability while significantly reducing hallucinations, particularly in long reasoning chains.
  • The authors provide an open-source implementation on GitHub, enabling replication and further experimentation.

Abstract

Multimodal Large Language Models (MLLMs) achieve strong multimodal reasoning performance, yet we identify a recurring failure mode in long-form generation: as outputs grow longer, models progressively drift away from image evidence and fall back on textual priors, resulting in ungrounded reasoning and hallucinations. Interestingly, based on attention analysis, we find that MLLMs have a latent capability for late-stage visual verification that is present but not consistently activated. Motivated by this observation, we propose Visual Re-Examination (VRE), a self-evolving training framework that enables MLLMs to autonomously perform visual introspection during reasoning without additional visual inputs. Rather than distilling visual capabilities from a stronger teacher, VRE promotes iterative self-improvement by leveraging the model itself to generate reflection traces, making visual information actionable through information gain. Extensive experiments across diverse multimodal benchmarks demonstrate that VRE consistently improves reasoning accuracy and perceptual reliability, while substantially reducing hallucinations, especially in long-chain settings. Code is available at https://github.com/Xiaobu-USTC/VRE.
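The abstract does not spell out how VRE scores its self-generated reflection traces, but "making visual information actionable through information gain" suggests a filter that keeps only reflections which increase the model's probability of the grounded answer. The sketch below is a hypothetical illustration of that idea, not the authors' implementation; all function and variable names (`information_gain`, `select_traces`, the example trace strings and probabilities) are assumptions for demonstration.

```python
import math

def information_gain(p_with: float, p_without: float) -> float:
    """Reduction in surprisal of the reference answer when a
    reflection trace is added to the reasoning context.
    Positive means the trace made the correct answer more likely."""
    return math.log(p_with) - math.log(p_without)

def select_traces(traces, baseline_p, p_with_trace, threshold=0.0):
    """Keep only self-generated reflection traces whose inclusion
    yields positive information gain over the no-reflection baseline.
    In a real pipeline, the probabilities would come from the MLLM's
    likelihood of the grounded answer; here they are toy values."""
    return [
        t for t in traces
        if information_gain(p_with_trace[t], baseline_p) > threshold
    ]

# Toy example: a visually grounded re-check helps; a trace that
# merely restates the text prior does not.
traces = ["re-check object count", "restate text prior"]
p_with = {"re-check object count": 0.9, "restate text prior": 0.3}
print(select_traces(traces, baseline_p=0.5, p_with_trace=p_with))
# → ['re-check object count']
```

Under this reading, only reflections that genuinely re-engage the image survive filtering, which would explain the reported hallucination reduction in long reasoning chains.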