Prioritizing the Best: Incentivizing Reliable Multimodal Reasoning by Rewarding Beyond Answer Correctness

arXiv cs.CL / 4/22/2026


Key Points

  • The paper introduces a multimodal RL approach that addresses a mismatch between answer correctness and reasoning validity, termed “reasoning-answer inconsistency.”
  • It compares two trajectory-supervision strategies for reinforcement learning with verifiable rewards, reward models (RMs) and generative rewards (GRs), noting tradeoffs in efficiency, stability, and computational cost.
  • To better separate correct trajectories by reasoning quality, the authors propose Groupwise Ranking Reward, which ranks verifier-passed trajectories for the same prompt in a single pass.
  • Experiments find that RLVR can worsen reasoning-answer inconsistency, while trajectory supervision reduces it.
  • Groupwise Ranking Reward delivers the best results overall, improving reliability-conditioned accuracy from 47.4% to 54.7% compared with RLVR (see the metric sketch after this list).
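
To make the headline metric concrete, here is a minimal sketch of how "reliability-conditioned accuracy" could be computed, assuming it counts a response as a hit only when the final answer is correct and the reasoning trace is judged reliable. The paper's exact definition may differ, and the function name and the `answer_correct` / `reasoning_reliable` inputs are hypothetical:

```python
def reliability_conditioned_accuracy(answer_correct, reasoning_reliable):
    """Fraction of responses whose final answer is correct AND whose
    reasoning trace is judged reliable. One plausible reading of the
    metric; the paper's exact definition may differ."""
    assert len(answer_correct) == len(reasoning_reliable)
    hits = sum(a and r for a, r in zip(answer_correct, reasoning_reliable))
    return hits / len(answer_correct)
```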

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) improves multimodal reasoning by rewarding verifiable final answers. Yet answer-correct trajectories may still rely on incomplete derivations, weak evidence, or statements that contradict their conclusions. This gap between answer correctness and reasoning validity, which we call reasoning-answer inconsistency, motivates trajectory supervision in multimodal RL. We compare two main approaches: reward models (RMs) and generative rewards (GRs). RMs are efficient and help early in training, but their gains weaken as the policy distribution shifts; GRs improve performance but can give unstable rewards and are computationally expensive. We therefore propose Groupwise Ranking Reward, which ranks verifier-passed trajectories for the same prompt in one pass and redistributes reward accordingly. Groupwise comparison better separates stronger and weaker correct trajectories, with lower judge overhead than GRs. Experiments show that RLVR aggravates reasoning-answer inconsistency, while trajectory supervision alleviates it. Groupwise Ranking Reward performs best overall, improving reliability-conditioned accuracy from 47.4% to 54.7% over RLVR.
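
As a rough illustration of the mechanism, the sketch below shows one way a groupwise ranking reward could redistribute credit within a group of sampled trajectories: trajectories that fail the answer verifier receive zero, while verifier-passed trajectories are ranked by a judge score obtained in a single judging pass and assigned linearly decaying rewards. The function name, the judge-score input, and the linear schedule (`r_max`, `r_min`) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def groupwise_ranking_reward(judge_scores, verifier_passed,
                             r_max=1.0, r_min=0.5):
    """Illustrative sketch of a groupwise ranking reward for one prompt.

    judge_scores:    per-trajectory reasoning-quality scores from a
                     single judging pass over the group (assumed input).
    verifier_passed: per-trajectory flags from the answer verifier.
    """
    judge_scores = np.asarray(judge_scores, dtype=float)
    verifier_passed = np.asarray(verifier_passed, dtype=bool)

    rewards = np.zeros_like(judge_scores)
    passed_idx = np.flatnonzero(verifier_passed)
    k = len(passed_idx)
    if k == 0:
        return rewards  # no correct trajectory in this group

    # Rank verifier-passed trajectories by judge score, best first.
    order = passed_idx[np.argsort(-judge_scores[passed_idx])]
    # Redistribute reward linearly from r_max (best reasoning) down to
    # r_min (weakest correct reasoning); k == 1 yields just [r_max].
    rewards[order] = np.linspace(r_max, r_min, k)
    return rewards

# Example: 4 sampled trajectories for one prompt, two pass the verifier.
groupwise_ranking_reward(
    judge_scores=[0.9, 0.2, 0.7, 0.4],
    verifier_passed=[True, False, True, False],
)
# -> array([1. , 0. , 0.5, 0. ])
```

The linear schedule is just one choice; the property this sketch tries to capture is that every correct trajectory keeps a positive reward while the policy gradient now favors the better-reasoned ones.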