Find, Fix, Reason: Context Repair for Video Reasoning

arXiv cs.CV / 4/20/2026


Key Points

  • The paper proposes “Find, Fix, Reason,” an observation-level context repair method for video reasoning that adds minimal missing spatiotemporal evidence to the original video without changing the question.
  • A frozen, tool-integrated teacher model detects what dependency is missing and outputs a targeted evidence patch (e.g., specific timestamps or regions), which the student model uses to re-answer and learn.
  • Training uses a chosen-rollout scheme integrated into Group Relative Policy Optimization (GRPO), preserving on-policy exploration while directing it toward causally relevant evidence.
  • The method introduces a Robust Improvement Reward (RIR) that jointly optimizes for answer validity and rationale alignment with the evidence provided by the teacher.
  • Experiments across related benchmarks reportedly show consistent accuracy improvements and strong generalization, and the authors plan to release a web page and source code.
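The reward and advantage computation described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the `robust_improvement_reward` weighting and the `evidence_overlap` proxy for rationale–evidence alignment are assumptions, since the paper summary does not specify the exact reward form. The group normalization follows the standard GRPO recipe of normalizing each rollout's reward by the group mean and standard deviation.

```python
# Sketch (assumed, not the paper's implementation) of a Robust Improvement
# Reward combined with GRPO-style group-normalized advantages.
import statistics

def robust_improvement_reward(answer_correct: bool, evidence_overlap: float,
                              w_outcome: float = 1.0, w_align: float = 0.5) -> float:
    """Hypothetical RIR: outcome validity plus dependency alignment.
    `evidence_overlap` in [0, 1] is a stand-in for how well the rationale
    cites the teacher-provided evidence patch (timestamps/regions)."""
    return w_outcome * float(answer_correct) + w_align * evidence_overlap

def group_normalized_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantages: center each rollout's reward on the group
    mean and scale by the group standard deviation."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero spread
    return [(r - mu) / sigma for r in rewards]

# Example: four rollouts for one question (correctness, alignment proxy).
rollouts = [(True, 0.9), (True, 0.4), (False, 0.7), (False, 0.1)]
rewards = [robust_improvement_reward(c, a) for c, a in rollouts]
advantages = group_normalized_advantages(rewards)
```

A correct, well-aligned rollout receives the largest positive advantage, so the policy update is pushed toward answers whose rationales actually use the patched evidence rather than answers that happen to be right.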

Abstract

Reinforcement learning has advanced video reasoning in large multi-modal models, yet dominant pipelines either rely on on-policy self-exploration, which plateaus at the model's knowledge boundary, or hybrid replay that mixes policies and demands careful regularization. Dynamic context methods zoom into focused evidence but often require curated pretraining and two-stage tuning, and their context remains bounded by a small model's capability. In contrast, larger models excel at instruction following and multi-modal understanding, can supply richer context to smaller models, and can rapidly zoom in on target regions via simple tools. Building on this capability, we introduce an observation-level intervention: a frozen, tool-integrated teacher identifies the missing spatiotemporal dependency and provides a minimal evidence patch (e.g., timestamps or regions) from the original video while the question remains unchanged. The student answers again with the added context, and training updates use a chosen-rollout scheme integrated into Group Relative Policy Optimization (GRPO). We further propose a Robust Improvement Reward (RIR) that aligns optimization with two goals: outcome validity through correct answers and dependency alignment through rationales that reflect the cited evidence. Advantages are group-normalized across the batch, preserving on-policy exploration while directing it along causally meaningful directions with minimal changes to the training stack. Experiments on various related benchmarks show consistent accuracy gains and strong generalization. A web page and source code will be available at https://github.com/JethroJames/FFR.git.