Visually-Guided Policy Optimization for Multimodal Reasoning

arXiv cs.CL / April 13, 2026


Key Points

  • The paper identifies a key limitation of current RL with verifiable rewards (RLVR) for vision-language models: text-dominated training leads to weak visual faithfulness and sparse attention to visual tokens.
  • It further shows that “temporal visual forgetting” across reasoning steps worsens this issue, making later-step visual grounding less reliable.
  • The authors propose Visually-Guided Policy Optimization (VGPO), which uses a Visual Attention Compensation mechanism based on visual similarity to better localize and amplify visual cues.
  • VGPO also progressively increases visual expectations over later reasoning steps to mitigate visual forgetting.
  • Experiments report improved visual activation and stronger performance on mathematical multimodal reasoning and other visual-dependent tasks.
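The progressive elevation of visual expectations mentioned in the bullets above could be sketched as a simple monotone schedule over reasoning steps. The functional form, parameter names, and constants below are illustrative assumptions, not the paper's actual schedule:

```python
def visual_expectation(step: int, total_steps: int,
                       base: float = 0.2, peak: float = 0.8) -> float:
    """Hypothetical schedule: the expected visual activation rises
    linearly over later reasoning steps, so that later steps are held
    to a stricter visual-grounding standard (countering forgetting).

    base/peak are illustrative constants, not values from the paper.
    """
    frac = step / max(total_steps - 1, 1)  # progress through the trajectory
    return base + (peak - base) * frac
```

Any monotone increasing schedule (e.g. exponential or stepwise) would serve the same purpose; the key property is that later steps face a higher expected visual activation.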

Abstract

Reinforcement learning with verifiable rewards (RLVR) has significantly advanced the reasoning ability of vision-language models (VLMs). However, the inherent text-dominated nature of VLMs often leads to insufficient visual faithfulness, characterized by sparse attention activation to visual tokens. More importantly, our empirical analysis reveals that temporal visual forgetting along reasoning steps exacerbates this deficiency. To bridge this gap, we propose Visually-Guided Policy Optimization (VGPO), a novel framework to reinforce visual focus during policy optimization. Specifically, VGPO initially introduces a Visual Attention Compensation mechanism that leverages visual similarity to localize and amplify visual cues, while progressively elevating visual expectations in later steps to counteract visual forgetting. Building on this mechanism, we implement a dual-grained advantage re-weighting strategy: the intra-trajectory level highlights tokens exhibiting relatively high visual activation, while the inter-trajectory level prioritizes trajectories demonstrating superior visual accumulation. Extensive experiments demonstrate that VGPO achieves better visual activation and superior performance in mathematical multimodal reasoning and visual-dependent tasks.
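The dual-grained advantage re-weighting described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions: the per-token visual-activation scores are taken as given inputs, and the normalization scheme and scaling constants (`alpha`, `beta`) are hypothetical, not the paper's implementation:

```python
import numpy as np

def reweight_advantages(advantages, visual_scores, alpha=0.5, beta=0.5):
    """Hypothetical sketch of dual-grained advantage re-weighting.

    advantages:    list of per-token advantage arrays, one per trajectory
    visual_scores: list of per-token visual-activation arrays (same shapes)
    alpha, beta:   illustrative scaling constants (assumptions)
    """
    # Inter-trajectory grain: weight whole trajectories by how much
    # visual activation they accumulate relative to the batch mean.
    traj_totals = np.array([vs.sum() for vs in visual_scores])
    traj_weights = 1.0 + beta * (
        (traj_totals - traj_totals.mean()) / (traj_totals.std() + 1e-8)
    )

    reweighted = []
    for adv, vs, w in zip(advantages, visual_scores, traj_weights):
        # Intra-trajectory grain: up-weight tokens whose visual
        # activation is high relative to their own trajectory's mean.
        token_weights = 1.0 + alpha * ((vs - vs.mean()) / (vs.std() + 1e-8))
        reweighted.append(w * token_weights * adv)
    return reweighted
```

The intent matches the abstract's description: visually grounded tokens receive larger effective advantages within a trajectory, and trajectories with stronger overall visual accumulation dominate across trajectories.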