HORNet: Task-Guided Frame Selection for Video Question Answering with Vision-Language Models

arXiv cs.CV / 3/20/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • HORNet is a lightweight frame-selection policy trained with Group Relative Policy Optimization (GRPO) to choose the frames a frozen vision-language model needs for reliable VQA performance.
  • It achieves dramatic efficiency gains, reducing input frames by up to 99% and VLM processing time by up to 93%, while boosting answer quality on short-form benchmarks (+1.7% F1 on MSVD-QA) and temporal reasoning tasks (+7.3 points over uniform sampling on NExT-QA).
  • The method formalizes Select Any Frames (SAF) and generalizes better out-of-distribution than supervised or PPO baselines, with cross-model transfer yielding an additional 8.5% relative gain when paired with a stronger VLM.
  • Evaluated on six benchmarks (341,877 QA pairs, 114.2 hours of video) and with publicly available code, HORNet demonstrates that choosing what the model sees is a practical complement to improving what it generates.
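The core idea in the bullets above, training a selection policy with group-relative advantages rather than a learned critic, can be illustrated with a toy sketch. The code below is not the paper's implementation: it is a minimal, hypothetical REINFORCE-style loop over per-frame selection logits, where the VLM's answer correctness is stood in for by a toy reward (one "key frame" contains the answer, and each extra frame costs a little, mirroring the efficiency objective). The GRPO-specific part is `grpo_advantages`, which centers each sampled rollout's reward by the group mean and scales by the group's standard deviation.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def grpo_advantages(rewards):
    # Group-relative advantage: center by the group mean and scale by the
    # group std, so no learned value function (critic) is required.
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
    return [(r - mean) / (std + 1e-8) for r in rewards]

def reward(selected, key_frame, cost_per_frame=0.05):
    # Toy stand-in for "did the frozen VLM answer correctly": the answer is
    # assumed recoverable only from one key frame, minus a per-frame cost
    # that rewards selecting fewer frames (the efficiency objective).
    hit = 1.0 if key_frame in selected else 0.0
    return hit - cost_per_frame * len(selected)

def train(num_frames=8, key_frame=3, group_size=8, iters=300, lr=0.5):
    theta = [0.0] * num_frames  # per-frame selection logits (the "policy")
    for _ in range(iters):
        probs = [sigmoid(t) for t in theta]
        group, rewards = [], []
        for _ in range(group_size):  # sample a group of frame subsets
            mask = [1 if random.random() < p else 0 for p in probs]
            group.append(mask)
            rewards.append(
                reward({i for i, m in enumerate(mask) if m}, key_frame)
            )
        advs = grpo_advantages(rewards)
        for mask, adv in zip(group, advs):  # REINFORCE update per rollout
            for i in range(num_frames):
                theta[i] += lr * adv * (mask[i] - probs[i])
    return theta

theta = train()
probs = [sigmoid(t) for t in theta]
# After training, the policy should assign the key frame the highest
# selection probability while suppressing the uninformative frames.
```

Note the design point this sketch makes concrete: because advantages are computed relative to the sampled group, the policy needs only reward signals from the frozen answerer, which is what lets a sub-1M-parameter selector be trained without touching the VLM itself.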

Abstract

Video question answering (VQA) with vision-language models (VLMs) depends critically on which frames are selected from the input video, yet most systems rely on uniform or heuristic sampling that cannot be optimized for downstream answering quality. We introduce HORNet, a lightweight frame selection policy trained with Group Relative Policy Optimization (GRPO) to learn which frames a frozen VLM needs to answer questions correctly. With fewer than 1M trainable parameters, HORNet reduces input frames by up to 99% and VLM processing time by up to 93%, while improving answer quality on short-form benchmarks (+1.7% F1 on MSVD-QA) and achieving strong performance on temporal reasoning tasks (+7.3 points over uniform sampling on NExT-QA). We formalize this as Select Any Frames (SAF), a task that decouples visual input curation from VLM reasoning, and show that GRPO-trained selection generalizes better out-of-distribution than supervised and PPO alternatives. HORNet's policy further transfers across VLM answerers without retraining, yielding an additional 8.5% relative gain when paired with a stronger model. Evaluated across six benchmarks spanning 341,877 QA pairs and 114.2 hours of video, our results demonstrate that optimizing *what* a VLM sees is a practical and complementary alternative to optimizing what it generates, while improving efficiency. Code is available at https://github.com/ostadabbas/HORNet.