When Thinking Hurts: Mitigating Visual Forgetting in Video Reasoning via Frame Repetition

arXiv cs.CV / 3/18/2026

Key Points

  • FrameRepeat introduces an automated framework that helps Video-LLMs reinforce the most informative frames during reasoning to combat visual anchor drift.
  • The approach uses a lightweight frame scoring network and a training strategy called Add-One-In (AOI) to derive supervision signals from MLLM output probabilities.
  • AOI supervision trains a frame scorer that guides when and which frames should be repeated to strengthen visual cues.
  • The authors demonstrate the method's effectiveness and generalizability across multiple models and datasets, offering improvements without prohibitive training costs.
  • FrameRepeat aims to improve the reliability of visual inputs in long-horizon video reasoning, addressing a key limitation of prior CoT-based video QA methods.

Abstract

Recently, Multimodal Large Language Models (MLLMs) have demonstrated significant potential in complex visual tasks through the integration of Chain-of-Thought (CoT) reasoning. However, in Video Question Answering, extended thinking processes do not consistently yield performance gains and may even degrade accuracy due to "visual anchor drifting", where models increasingly rely on self-generated text, sidelining visual inputs and causing hallucinations. While existing mitigations typically introduce specific mechanisms for the model to re-attend to visual inputs during inference, these approaches often incur prohibitive training costs and generalize poorly across architectures. To address this, we propose FrameRepeat, an automated enhancement framework featuring a lightweight repeat-scoring module that enables Video-LLMs to autonomously identify which frames should be reinforced. We introduce a novel training strategy, Add-One-In (AOI), that uses MLLM output probabilities to generate supervision signals representing the repeat gain of each frame. These signals are used to train a frame-scoring network that guides the frame repetition behavior. Experimental results across multiple models and datasets demonstrate that FrameRepeat is both effective and generalizable in strengthening important visual cues during the reasoning process.
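The Add-One-In idea described in the abstract can be illustrated with a toy sketch. Everything below is an assumption for illustration: `toy_answer_logprob` stands in for querying a real MLLM for the log-probability of the correct answer, and the paper's actual supervision construction may differ in detail. The sketch computes a per-frame "repeat gain", i.e. how much duplicating ("adding one in") a given frame raises the answer log-probability relative to the unmodified input.

```python
import math

def toy_answer_logprob(frames, question, answer):
    """Stand-in for an MLLM's log-probability of the correct answer.

    Hypothetical model: each frame carries a non-negative "evidence"
    score, and the answer probability saturates with total evidence.
    A real system would run a Video-LLM forward pass here.
    """
    evidence = sum(frames)
    return math.log(evidence / (evidence + 10.0))

def aoi_repeat_gains(logprob_fn, frames, question, answer):
    """AOI-style supervision signal: for each frame, the change in
    answer log-probability when that one frame is repeated."""
    base = logprob_fn(frames, question, answer)
    return [logprob_fn(frames + [f], question, answer) - base
            for f in frames]

# Frames encoded by toy informativeness; index 1 is most informative.
frames = [1.0, 5.0, 2.0]
gains = aoi_repeat_gains(toy_answer_logprob, frames, "Q", "A")
best = max(range(len(gains)), key=gains.__getitem__)
```

Under the paper's setup, such gains would presumably serve as regression targets for the lightweight frame-scoring network, so that at inference time the scorer can decide which frames to repeat without re-querying the full MLLM for every candidate.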