Video Active Perception: Effective Inference-Time Long-Form Video Understanding with Vision-Language Models

arXiv cs.CV / 5/5/2026


Key Points

  • The paper proposes Video Active Perception (VAP), a training-free method that improves long-form video question answering with vision-language models by selecting frames more effectively than uniform sampling.
  • VAP reframes keyframe selection as an “active perception” data-acquisition problem, using a lightweight text-conditioned video generation model to encode prior world knowledge and guide what information to request.
  • Experiments report state-of-the-art zero-shot performance on multiple long-form/reasoning video QA benchmarks (EgoSchema, NExT-QA, ActivityNet-QA, IntentQA, and CLEVRER).
  • The method improves frame efficiency by up to 5.6× in frames-per-question compared with baselines using GPT-4o, Gemini 1.5 Pro, and LLaVA-OV, while also showing stronger reasoning and question-relevant keyframe selection.
  • Overall, the results suggest active perception can make video QA both more effective and more computationally efficient by focusing inference on informative frames.
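The core idea above, that frames which deviate most from a prior model's expectation carry the most information, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the real method conditions a lightweight video generation model on the question text to produce the expected frames, whereas here `expected_frames` is simply supplied as an input, and the surprise score is a plain mean-squared deviation.

```python
import numpy as np

def select_keyframes(frames, expected_frames, k=8):
    """Rank frames by 'surprise': deviation from what a prior model
    (e.g., a text-conditioned video generator) expected to see.
    Under active perception, the most surprising frames are the most
    informative, so we keep the top-k of them in temporal order.
    """
    # Per-frame surprise = mean squared error against the expectation.
    surprise = np.mean((frames - expected_frames) ** 2, axis=(1, 2, 3))
    # Indices of the k most surprising frames, returned in video order.
    return np.sort(np.argsort(surprise)[-k:])

# Toy demo: 32 uniform gray frames; 4 of them are perturbed with noise,
# so only those 4 differ from the (all-gray) expectation.
rng = np.random.default_rng(0)
frames = np.full((32, 8, 8, 3), 0.5)
expected = np.full((32, 8, 8, 3), 0.5)
for i in (3, 10, 20, 29):
    frames[i] += rng.normal(0, 0.3, size=(8, 8, 3))

print(select_keyframes(frames, expected, k=4))  # → [ 3 10 20 29]
```

The selected frames would then be the only ones passed to the VLM, which is where the frames-per-question savings comes from: the VLM sees k frames instead of a dense uniform sample.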

Abstract

Large vision-language models (VLMs) have advanced multimodal tasks such as video question answering (QA). However, VLMs face the challenge of selecting frames effectively and efficiently, as standard uniform sampling is expensive and performance may plateau. Inspired by active perception theory, which posits that models gain information by acquiring data that differs from their expectations, we introduce Video Active Perception (VAP), a training-free method to enhance long-form video QA using VLMs. Our approach treats keyframe selection as data acquisition in active perception and leverages a lightweight text-conditioned video generation model to represent prior world knowledge. Empirically, VAP achieves state-of-the-art zero-shot results on long-form and reasoning video QA datasets such as EgoSchema, NExT-QA, ActivityNet-QA, IntentQA, and CLEVRER, yielding up to a 5.6× gain in frame efficiency (measured in frames per question) over standard GPT-4o, Gemini 1.5 Pro, and LLaVA-OV. Moreover, VAP shows stronger reasoning abilities than previous methods and effectively selects keyframes relevant to questions. These findings highlight the potential of leveraging active perception to improve the frame effectiveness and efficiency of long-form video QA.