Bridging Perception and Reasoning: Token Reweighting for RLVR in Multimodal LLMs

arXiv cs.CV / 3/27/2026


Key Points

  • The paper studies how extending Reinforcement Learning with Verifiable Rewards (RLVR) to multimodal LLMs is complicated by outputs that mix perception-grounding tokens with reasoning-chain tokens.
  • Token-level experiments show that optimizing only perception-related or only reasoning-related tokens leads to worse results than jointly optimizing the full sequence, indicating strong coupling between the two capabilities.
  • It introduces a plug-and-play Token-Reweighting (ToR) method that identifies critical perception and reasoning tokens and dynamically reweights them during RLVR training to model this interdependence.
  • When combined with existing RLVR-style methods (such as GRPO and DAPO), ToR delivers consistent gains across multiple multimodal reasoning benchmarks.
  • The approach reportedly achieves state-of-the-art results while maintaining both accurate visual grounding and coherent reasoning.

Abstract

Extending Reinforcement Learning with Verifiable Rewards (RLVR) to multimodal large language models (MLLMs) faces a fundamental challenge: their responses inherently interleave perception-related tokens, which ground visual content, with reasoning-related tokens, which construct reasoning chains. These token types instantiate distinct yet interdependent capacities, visual grounding and symbolic reasoning, making isolated optimization insufficient. Through token-level empirical analysis, we demonstrate that optimizing either perception-only or reasoning-only tokens consistently underperforms full optimization, underscoring their inherent coupling. To address this, we propose a plug-and-play Token-Reweighting (ToR) strategy that explicitly models this interdependence by identifying critical tokens of both types and dynamically reweighting them during RLVR training. Applied on top of existing methods (e.g., GRPO and DAPO), ToR delivers consistent performance gains across multiple multimodal reasoning benchmarks, achieving state-of-the-art performance with both accurate visual grounding and coherent reasoning.
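The token-reweighting idea described above can be sketched as a per-token weight folded into a GRPO-style policy-gradient term: tokens flagged as critical for perception or reasoning receive a larger coefficient than the rest of the sequence. The sketch below is a hypothetical illustration only; the function name, the boolean masks, the weight values, and the simple REINFORCE-style objective are all assumptions for clarity, not the paper's actual ToR formulation or its token-identification procedure.

```python
import numpy as np

def reweighted_pg_loss(token_logprobs, advantage,
                       perception_mask, reasoning_mask,
                       w_perc=1.5, w_reason=1.5):
    """Toy sketch of token reweighting in an RLVR objective.

    token_logprobs : per-token log-probs of the sampled response (1-D array)
    advantage      : scalar group-relative advantage for this response
    *_mask         : boolean arrays marking "critical" tokens of each type
    w_perc/w_reason: hypothetical upweighting factors for those tokens
    """
    # Default weight 1.0 for ordinary tokens; upweight critical ones.
    weights = np.ones_like(token_logprobs)
    weights = np.where(perception_mask, w_perc, weights)
    weights = np.where(reasoning_mask, w_reason, weights)
    # Standard per-token policy-gradient term, scaled by the token weights.
    per_token = -advantage * weights * token_logprobs
    return per_token.mean()
```

With both masks empty this reduces to the unweighted per-token objective, which is why the method can sit on top of existing RLVR algorithms such as GRPO or DAPO as a plug-and-play modification.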