Video Active Perception: Effective Inference-Time Long-Form Video Understanding with Vision-Language Models
arXiv cs.CV / 5/5/2026
Key Points
- The paper proposes Video Active Perception (VAP), a training-free method that improves long-form video question answering with vision-language models by selecting frames more effectively than uniform sampling.
- VAP reframes keyframe selection as an “active perception” data-acquisition problem, using a lightweight text-conditioned video generation model to encode prior world knowledge and guide what information to request (a toy sketch of this selection loop follows the list).
- Experiments report state-of-the-art zero-shot performance on multiple long-form/reasoning video QA benchmarks (EgoSchema, NExT-QA, ActivityNet-QA, IntentQA, and CLEVRER).
- The method achieves up to 5.6× higher frame efficiency (fewer frames per question) than baselines using GPT-4o, Gemini 1.5 Pro, and LLaVA-OV, while also showing stronger reasoning and more question-relevant keyframe selection.
- Overall, the results suggest active perception can make video QA both more effective and more computationally efficient by focusing inference on informative frames.
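To make the selection idea concrete, below is a minimal sketch of question-conditioned keyframe selection against a uniform-sampling baseline. It is an illustration under simplifying assumptions, not the paper's algorithm: it presumes frames and the question are already embedded in a shared feature space and uses plain cosine similarity as the relevance prior, whereas VAP derives its prior from a text-conditioned video generation model. The function names, feature dimensions, and frame budgets here are hypothetical.

```python
"""Toy sketch: question-conditioned keyframe selection vs. uniform sampling.

Assumptions (not from the paper): frames and the question are already
embedded in a shared feature space, and relevance is plain cosine
similarity. VAP instead scores what to perceive with a text-conditioned
video generation model as a world-knowledge prior; that component is
not reproduced here.
"""
import numpy as np


def uniform_sample(num_frames: int, budget: int) -> np.ndarray:
    """Baseline: pick `budget` evenly spaced frame indices."""
    return np.linspace(0, num_frames - 1, budget, dtype=int)


def select_keyframes(frame_feats: np.ndarray, query_feat: np.ndarray,
                     budget: int) -> np.ndarray:
    """Hypothetical active-perception stand-in: rank frames by cosine
    similarity to the question embedding and keep the top `budget`."""
    frames = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    query = query_feat / np.linalg.norm(query_feat)
    scores = frames @ query                # cosine similarity per frame
    top = np.argsort(scores)[-budget:]     # indices of highest-scoring frames
    return np.sort(top)                    # restore temporal order


rng = np.random.default_rng(0)
frame_feats = rng.normal(size=(1024, 256))   # 1024 frames, 256-d features
query_feat = rng.normal(size=256)            # embedded question

baseline_idx = uniform_sample(len(frame_feats), budget=32)
active_idx = select_keyframes(frame_feats, query_feat, budget=6)

# "Frame efficiency" in the spirit of the paper's frames-per-question
# metric: ratio of the uniform baseline's frame budget to the active
# budget (32 / 6 ≈ 5.3x here; the paper reports up to 5.6x).
print(f"uniform budget: {len(baseline_idx)} frames")
print(f"active budget:  {len(active_idx)} frames "
      f"({len(baseline_idx) / len(active_idx):.1f}x fewer)")
```

Sorting the selected indices back into temporal order is a deliberate choice: VLMs consume frames as an ordered sequence, so relevance ranking should not scramble the timeline.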