GIFT: Global Irreplaceability Frame Targeting for Efficient Video Understanding

arXiv cs.CV / 3/27/2026


Key Points

  • The paper introduces GIFT (Global Irreplaceability Frame Targeting) to reduce the heavy compute cost of dense-frame processing in Video Large Language Models while improving video understanding accuracy.
  • GIFT is training-free and selects frames by computing an intrinsic irreplaceability score using Directed Diversity to measure uniqueness conditioned on relevance, avoiding greedy local-optimum frame selection.
  • It uses a Budget-Aware Refinement strategy that first picks a high-irreplaceability core set, then incrementally expands temporal context as the frame budget increases.
  • Experiments report a maximum average improvement of 12.5% on long-form video benchmarks for LLaVA-Video-7B versus uniform sampling.

Abstract

Video Large Language Models (VLMs) have achieved remarkable success in video understanding, but the significant computational cost of processing dense frames severely limits their practical application. Existing methods alleviate this by selecting keyframes, but their greedy decision-making, combined with a decoupled evaluation of relevance and diversity, often falls into local optima and results in erroneously selecting irrelevant noise frames. To address these challenges, we propose GIFT: Global Irreplaceability Frame Targeting, a novel training-free framework that selects frames by assessing their intrinsic irreplaceability. Specifically, we first introduce Directed Diversity to quantify a frame's uniqueness conditioned on relevance, which allows us to formulate a unified irreplaceability score. Subsequently, our Budget-Aware Refinement strategy employs an adaptive iterative process that first secures a core set of frames with the highest irreplaceability, and then shifts its priority to building crucial temporal context around these selections as the budget expands. Extensive experiments demonstrate that GIFT achieves a maximum average improvement of 12.5% across long-form video benchmarks on LLaVA-Video-7B compared to uniform sampling.
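The abstract does not give the paper's exact formulas, but the two ideas — an irreplaceability score combining relevance with Directed Diversity, and a Budget-Aware Refinement pass that first secures a core set and then fills temporal context — can be sketched as follows. All concrete choices here (cosine relevance, "distance to the nearest more-relevant frame" as directed diversity, gap-midpoint context filling, the `core_frac` parameter) are illustrative assumptions, not GIFT's actual definitions:

```python
import numpy as np

def irreplaceability_scores(frame_feats, query_feat):
    """Illustrative irreplaceability: a frame's relevance to the query,
    scaled by how distinct it is from every MORE relevant frame (a stand-in
    for 'Directed Diversity'). Formulas are assumptions, not the paper's."""
    F = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    relevance = F @ q                       # cosine similarity to the query
    sim = F @ F.T                           # frame-to-frame similarity
    n = len(relevance)
    directed_div = np.ones(n)
    for i in range(n):
        higher = relevance > relevance[i]   # frames more relevant than i
        if higher.any():
            # uniqueness = dissimilarity to the closest more-relevant frame
            directed_div[i] = 1.0 - sim[i, higher].max()
    return relevance * directed_div

def budget_aware_select(scores, budget, core_frac=0.5):
    """Sketch of Budget-Aware Refinement: secure a high-score core set,
    then spend the remaining budget filling temporal gaps around it."""
    n = len(scores)
    core_k = max(1, int(budget * core_frac))
    selected = set(np.argsort(scores)[::-1][:core_k])  # top-score core
    while len(selected) < min(budget, n):
        idx = sorted(selected)
        # candidate gaps between consecutive picks (plus the two boundaries)
        gaps = [(b - a, (a + b) // 2) for a, b in zip([-1] + idx, idx + [n])]
        gap, mid = max(gaps)
        if gap <= 1:                        # no room left to add context
            break
        selected.add(mid)                   # midpoint of the largest gap
    return sorted(selected)
```

Under this reading, the frame budget controls the trade-off directly: a small budget is spent almost entirely on the high-irreplaceability core, while a larger budget increasingly buys temporal context around those anchors.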