Walk the Talk: Bridging the Reasoning-Action Gap for Thinking with Images via Multimodal Agentic Policy Optimization

arXiv cs.CV / 4/9/2026


Key Points

  • The paper argues that current RL training for multimodal agentic reasoning can produce a “reasoning-action gap,” where text looks plausible even when the model takes imprecise or irrelevant visual actions via tools.
  • It proposes Multimodal Agentic Policy Optimization (MAPO), which forces the model to generate explicit textual descriptions of visual observations obtained through tool use during Multimodal Chain-of-Thought (MCoT).
  • MAPO uses a new advantage estimation method that jointly considers the task reward and the semantic alignment between the generated descriptions and the actual tool observations, reducing noisy feedback over multi-turn trajectories.
  • The authors provide theoretical justification that MAPO reduces gradient variance and report empirical improvements across multiple visual reasoning benchmarks.
  • Overall, the work targets training stability concerns such as performance degradation from accumulated noise and potential training collapse in multimodal agentic setups.

Abstract

Recent advancements in Multimodal Large Language Models (MLLMs) have incentivized models to "think with images" by actively invoking visual tools during multi-turn reasoning. The common Reinforcement Learning (RL) practice of relying on outcome-based rewards ignores the fact that textual plausibility often masks executive failure, meaning that models may exhibit intuitive textual reasoning while executing imprecise or irrelevant visual actions within their agentic reasoning trajectories. This reasoning-action discrepancy introduces noise that accumulates throughout the multi-turn reasoning process, severely degrading the model's multimodal reasoning capabilities and potentially leading to training collapse. In this paper, we introduce Multimodal Agentic Policy Optimization (MAPO), bridging the gap between textual reasoning and visual actions generated by models within their Multimodal Chain-of-Thought (MCoT). Specifically, MAPO mandates that the model generate explicit textual descriptions for the visual content obtained via tool usage. We then employ a novel advantage estimation that couples the semantic alignment between these descriptions and the actual observations with the task reward. Theoretical findings are provided to justify the rationale behind MAPO, which inherently reduces the variance of gradients, and extensive experiments demonstrate that our method achieves superior performance across multiple visual reasoning benchmarks.
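To make the advantage-estimation idea concrete, here is a minimal sketch of how one might couple a per-trajectory semantic-alignment score with the task reward before computing a group-baseline advantage. This is an illustrative assumption, not the paper's actual formulation: the function name `mapo_advantage`, the linear coupling via `align_weight`, and the group-mean baseline are all hypothetical choices made for the example.

```python
def mapo_advantage(task_rewards, alignment_scores, align_weight=0.5):
    """Illustrative MAPO-style advantage estimate (assumed, not from the paper).

    task_rewards:     outcome-based task reward per sampled trajectory
    alignment_scores: semantic alignment (in [0, 1]) between the model's
                      textual descriptions and the actual tool observations
    align_weight:     how strongly alignment is coupled into the signal
    """
    # Couple the task reward with the description-observation alignment,
    # so plausible text with poor visual actions is penalized.
    combined = [r + align_weight * a
                for r, a in zip(task_rewards, alignment_scores)]
    # Subtract the group mean as a baseline; centering the signal this way
    # is a standard variance-reduction step in policy-gradient methods.
    baseline = sum(combined) / len(combined)
    return [c - baseline for c in combined]


# Example: two trajectories with equal task reward but different alignment;
# the well-aligned one receives the positive advantage.
advantages = mapo_advantage([1.0, 1.0], [0.9, 0.1])
```

The design intuition is that alignment acts as a dense, per-trajectory correction to a sparse outcome reward, which is one plausible route to the reduced gradient variance the authors claim.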