Reinforcing Structured Chain-of-Thought for Video Understanding

arXiv cs.AI / 3/30/2026


Key Points

  • The paper addresses shortcomings in video understanding with multimodal large language models, including "thinking drift" during reasoning and weak temporal comprehension that persist despite prior RL methods such as Group Relative Policy Optimization (GRPO).
  • It proposes Summary-Driven Reinforcement Learning (SDRL), a single-stage RL approach that removes the need for supervised fine-tuning with costly Chain-of-Thought annotations.
  • SDRL uses a structured reasoning format—Summarize → Think → Answer—and adds two self-supervised signals into the GRPO objective: Consistency of Vision Knowledge (CVK) for factual grounding and Dynamic Variety of Reasoning (DVR) for exploration.
  • The method supervises both the final answers and intermediate reasoning behavior while aiming to improve generalization by avoiding fixed reasoning paths and reducing induced bias.
  • Experiments report state-of-the-art results across seven public VideoQA benchmarks.
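The CVK signal above rewards summaries that agree with one another across the sampled group. The paper does not spell out the exact computation here, so the following is a minimal sketch under an assumed form: treat each rollout's summary as a categorical distribution (e.g. over a vocabulary or topic space) and use the negative mean pairwise symmetric KL divergence as a consistency reward. The function names and the pairwise-symmetric-KL formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two categorical distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def cvk_reward(summary_dists):
    """Consistency-of-Vision-Knowledge-style reward (assumed form):
    negative mean pairwise symmetric KL among the group's summary
    distributions. Lower divergence (more agreement) -> higher reward."""
    n = len(summary_dists)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 0.5 * (kl(summary_dists[i], summary_dists[j])
                            + kl(summary_dists[j], summary_dists[i]))
            pairs += 1
    return -total / pairs if pairs else 0.0
```

A group whose summaries induce identical distributions scores 0 (the maximum), while divergent summaries are penalized, which is the "factual grounding" pressure the paper describes.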

Abstract

Multi-modal Large Language Models (MLLMs) show promise in video understanding. However, their reasoning often suffers from thinking drift and weak temporal comprehension, even when enhanced by Reinforcement Learning (RL) techniques like Group Relative Policy Optimization (GRPO). Moreover, existing RL methods usually depend on Supervised Fine-Tuning (SFT), which requires costly Chain-of-Thought (CoT) annotation and multi-stage training, and enforces fixed reasoning paths, limiting MLLMs' ability to generalize and potentially inducing bias. To overcome these limitations, we introduce Summary-Driven Reinforcement Learning (SDRL), a novel single-stage RL framework that obviates the need for SFT by utilizing a Structured CoT format: Summarize -> Think -> Answer. SDRL introduces two self-supervised mechanisms integrated into the GRPO objective: 1) Consistency of Vision Knowledge (CVK) enforces factual grounding by reducing KL divergence among generated summaries; and 2) Dynamic Variety of Reasoning (DVR) promotes exploration by dynamically modulating thinking diversity based on group accuracy. This novel integration effectively balances alignment and exploration, supervising both the final answer and the reasoning process. Our method achieves state-of-the-art performance on seven public VideoQA datasets.
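The DVR mechanism in the abstract modulates thinking diversity by group accuracy, but the abstract does not give its functional form. Below is one plausible sketch, with all specifics assumed: reasoning diversity is measured as mean pairwise cosine distance between embeddings of the "Think" segments, then scaled by a weight that shrinks as group accuracy rises, so exploration is encouraged mainly when the group is failing.

```python
import numpy as np

def dvr_weight(group_accuracy, base=1.0):
    """Assumed accuracy-dependent modulation: damp the diversity
    bonus when the group already answers correctly; boost it when
    accuracy is low, to push exploration of new reasoning paths."""
    return base * (1.0 - group_accuracy)

def dvr_reward(thinking_embeddings, group_accuracy):
    """Dynamic-Variety-of-Reasoning-style reward (illustrative):
    mean pairwise cosine distance among the group's reasoning-trace
    embeddings, scaled by the accuracy-dependent weight."""
    n = len(thinking_embeddings)
    dist, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            a = np.asarray(thinking_embeddings[i], dtype=float)
            b = np.asarray(thinking_embeddings[j], dtype=float)
            cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            dist += 1.0 - cos
            pairs += 1
    diversity = dist / pairs if pairs else 0.0
    return dvr_weight(group_accuracy) * diversity
```

Under this sketch, a fully correct group (accuracy 1.0) receives no diversity bonus, matching the stated goal of balancing alignment against exploration within the GRPO objective.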