Understanding the Role of Hallucination in Reinforcement Post-Training of Multimodal Reasoning Models

arXiv cs.LG / 4/6/2026


Key Points

  • The paper studies how reinforcement-learning (RL) post-training affects multimodal large language models (MLLMs), especially whether improvements truly reflect learning from visual information.
  • It introduces the Hallucination-as-Cue Framework, which uses hallucination-inductive, modality-specific corruptions to remove or replace key visual information so the model must rely on “hallucination” to answer.
  • Experiments across multiple multimodal reasoning benchmarks suggest hallucination plays a more important role in RL training dynamics than earlier research assumed.
  • The authors find that RL post-training can improve reasoning even under settings engineered to induce hallucination, sometimes exceeding standard (non-RL) training performance.
  • The results challenge prevailing assumptions about how MLLMs learn during RL post-training and motivate more modality-aware RL training designs.

Abstract

The recent success of reinforcement learning (RL) in large reasoning models has inspired the growing adoption of RL for post-training Multimodal Large Language Models (MLLMs) to enhance their visual reasoning capabilities. Although many studies have reported improved performance, it remains unclear whether RL training truly enables models to learn from visual information. In this work, we propose the Hallucination-as-Cue Framework, an analytical framework designed to investigate the effects of RL-based post-training on multimodal reasoning models from the perspective of model hallucination. Specifically, we introduce hallucination-inductive, modality-specific corruptions that remove or replace essential information required to derive correct answers, thereby forcing the model to reason by hallucination. By applying these corruptions during both training and evaluation, our framework provides a unique perspective for diagnosing RL training dynamics and understanding the intrinsic properties of datasets. Through extensive experiments and analyses across multiple multimodal reasoning benchmarks, we reveal that the role of model hallucination in RL training is more significant than previously recognized. For instance, we find that RL post-training under purely hallucination-inductive settings can still significantly improve models' reasoning performance, and in some cases even outperform standard training. These findings challenge prevailing assumptions about MLLM reasoning training and motivate the development of more modality-aware RL-based training designs.
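The paper does not include its corruption code here, but the core idea — destroying the visual evidence needed to answer, so that any correct answer must come from the model's priors ("hallucination") rather than the image — can be sketched. The function name and the two corruption modes below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def corrupt_visual_modality(image: np.ndarray, mode: str = "blank",
                            seed: int = 0) -> np.ndarray:
    """Illustrative hallucination-inductive corruption (not the paper's code).

    mode='blank' -> overwrite every pixel with mid-gray (removes information)
    mode='noise' -> replace the image with random pixels (replaces information)
    Either way, the visual input no longer carries the answer-relevant content.
    """
    rng = np.random.default_rng(seed)
    if mode == "blank":
        return np.full_like(image, 127)
    if mode == "noise":
        return rng.integers(0, 256, size=image.shape, dtype=image.dtype)
    raise ValueError(f"unknown mode: {mode}")

# Toy 4x4 RGB "image" standing in for a benchmark sample's visual input.
img = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
blanked = corrupt_visual_modality(img, mode="blank")
noised = corrupt_visual_modality(img, mode="noise")
```

In the framework as described, a corruption like this would be applied to samples during both RL post-training and evaluation; any performance the model retains under such settings cannot be attributed to learning from the visual modality.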