Rethinking Token-Level Policy Optimization for Multimodal Chain-of-Thought

arXiv cs.CV / 3/25/2026

Key Points

  • The paper argues that current multimodal Chain-of-Thought RLVR approaches optimize reasoning at too coarse a granularity, failing to distinguish tokens with different levels of visual grounding.
  • It provides a token-level analysis showing that successful multimodal reasoning exhibits structured token dynamics that jointly reflect perceptual grounding and exploratory inference.
  • The proposed method, Perception-Exploration Policy Optimization (PEPO), builds a perception prior from hidden-state similarity and uses a smooth gating mechanism with token entropy to assign token-level advantages.
  • PEPO plugs into existing RLVR frameworks (e.g., GRPO and DAPO) without requiring extra supervision or auxiliary model components.
  • Experiments on multimodal benchmarks spanning geometry reasoning, visual grounding, visual puzzle solving, and few-shot classification report consistent, robust gains over strong RL baselines while keeping training stable.

Abstract

Multimodal Chain-of-Thought (CoT) reasoning requires large vision-language models to construct reasoning trajectories that interleave perceptual grounding with multi-step inference. However, existing Reinforcement Learning with Verifiable Rewards (RLVR) methods typically optimize reasoning at a coarse granularity, treating CoT tokens uniformly without distinguishing their varying degrees of visual grounding. In this work, we conduct a token-level analysis of multimodal reasoning trajectories and show that successful reasoning is characterized by structured token dynamics reflecting both perceptual grounding and exploratory inference. Building upon this analysis, we propose Perception-Exploration Policy Optimization (PEPO), which derives a perception prior from hidden-state similarity and integrates it with token entropy through a smooth gating mechanism to produce token-level advantages. PEPO integrates seamlessly with existing RLVR frameworks such as GRPO and DAPO, requiring neither additional supervision nor auxiliary branches. Extensive experiments across diverse multimodal benchmarks demonstrate consistent and robust improvements over strong RL baselines, spanning geometry reasoning, visual grounding, visual puzzle solving, and few-shot classification, while maintaining stable training dynamics. Code: https://github.com/xzxxntxdy/PEPO
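To make the described mechanism concrete, here is a minimal numpy sketch of how a perception prior from hidden-state similarity might be combined with token entropy through a smooth gate to turn a sequence-level (GRPO-style) advantage into token-level advantages. Everything here is an assumption for illustration: the function names, the mean-pooling of visual hidden states, the sigmoid gate, and the `alpha`/`beta` weights are not taken from the paper, whose exact formulation may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def token_entropy(logits):
    # Shannon entropy of each token's next-token distribution.
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def perception_prior(text_hidden, visual_hidden):
    # Assumed form of the prior: cosine similarity between each text
    # token's hidden state and the mean-pooled visual hidden state.
    v = visual_hidden.mean(axis=0)
    num = text_hidden @ v
    den = np.linalg.norm(text_hidden, axis=-1) * np.linalg.norm(v) + 1e-12
    return num / den

def token_level_advantages(seq_advantage, text_hidden, visual_hidden,
                           logits, alpha=1.0, beta=1.0):
    # Smoothly gate one sequence-level advantage into per-token advantages:
    # tokens that are visually grounded (high prior) or exploratory
    # (high entropy) receive larger weight. The sigmoid gate and the
    # linear combination of the two signals are illustrative assumptions.
    prior = perception_prior(text_hidden, visual_hidden)
    ent = token_entropy(logits)
    ent = ent / (ent.max() + 1e-12)  # normalize entropy to [0, 1]
    gate = 1.0 / (1.0 + np.exp(-(alpha * prior + beta * ent)))
    return seq_advantage * gate

# Toy shapes: 6 generated tokens, hidden size 8, vocab size 16,
# 4 visual tokens. Hidden states and logits are random stand-ins.
rng = np.random.default_rng(0)
adv = token_level_advantages(
    seq_advantage=0.5,
    text_hidden=rng.normal(size=(6, 8)),
    visual_hidden=rng.normal(size=(4, 8)),
    logits=rng.normal(size=(6, 16)),
)
print(adv.shape)
```

Because the gate is a sigmoid, each token's advantage keeps the sign of the sequence advantage while its magnitude is modulated per token, which is consistent with the summary's claim that PEPO plugs into GRPO/DAPO without extra supervision or auxiliary components.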