MSRL: Scaling Generative Multimodal Reward Modeling via Multi-Stage Reinforcement Learning

arXiv cs.CV / 3/27/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces Multi-Stage Reinforcement Learning (MSRL) to reduce the need for costly multimodal preference data, the main bottleneck in scaling multimodal reward models (MRMs).
  • MSRL first learns a reward-reasoning capability from large-scale textual preference data, then progressively transfers it through a caption-based stage and a fully multimodal stage, scaling RLVR-style training to the multimodal setting.
  • A cross-modal knowledge distillation step further improves preference generalization, aiming to raise performance even with limited multimodal data.
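The staged curriculum above can be sketched as a simple driver that runs one RLVR training pass per stage in order. This is a minimal illustration of the stage ordering only; the class and function names (`Stage`, `MSRLCurriculum`, `train_stage`) are hypothetical and each stage would in practice run a full RL loop (e.g., policy-gradient updates against verifiable rewards) over the listed preference data, which the paper details and this sketch does not.

```python
# Hypothetical sketch of the MSRL three-stage curriculum (not the authors' API).
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Stage:
    name: str
    data_source: str   # which preference data drives this stage
    uses_images: bool  # whether raw visual inputs are present


@dataclass
class MSRLCurriculum:
    stages: List[Stage] = field(default_factory=lambda: [
        # Stage 1: learn reward reasoning from cheap, large-scale text pairs.
        Stage("text", "large-scale textual preference data", False),
        # Stage 2: bridge modalities with captions standing in for images.
        Stage("caption", "caption-based preference data", False),
        # Stage 3: adapt to the limited labeled multimodal pairs.
        Stage("multimodal", "limited multimodal preference data", True),
    ])

    def run(self, train_stage: Callable[[Stage], None]) -> List[str]:
        """Run each RLVR stage in curriculum order; return the order executed."""
        order = []
        for stage in self.stages:
            train_stage(stage)  # placeholder for one RLVR training pass
            order.append(stage.name)
        return order


log: List[str] = []
order = MSRLCurriculum().run(lambda s: log.append(s.data_source))
print(order)  # ['text', 'caption', 'multimodal']
```

The point of the ordering is that the expensive multimodal data is touched only in the final stage, after the reward-reasoning skill has been acquired from cheaper text-only supervision.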

Abstract

Recent advances in multimodal reward modeling have been largely driven by a paradigm shift from discriminative to generative approaches. Building on this progress, recent studies have further employed reinforcement learning from verifiable rewards (RLVR) to enhance multimodal reward models (MRMs). Despite their success, RLVR-based training typically relies on labeled multimodal preference data, which are costly and labor-intensive to obtain, making it difficult to scale MRM training. To overcome this limitation, we propose a Multi-Stage Reinforcement Learning (MSRL) approach, which can achieve scalable RL for MRMs with limited multimodal data. MSRL replaces the conventional RLVR-based training paradigm by first learning a generalizable reward reasoning capability from large-scale textual preference data, and then progressively transferring this capability to multimodal tasks through caption-based and fully multimodal reinforcement-learning stages. Furthermore, we introduce a cross-modal knowledge distillation approach to improve preference generalization within MSRL. Extensive experiments demonstrate that MSRL effectively scales the RLVR-based training of generative MRMs and substantially improves their performance across both visual understanding and visual generation tasks (e.g., from 66.6% to 75.9% on VL-RewardBench and from 70.2% to 75.7% on GenAI-Bench), without requiring additional multimodal preference annotations. Our code is available at: https://github.com/wangclnlp/MSRL.