Query-Conditioned Evidential Keyframe Sampling for MLLM-Based Long-Form Video Understanding

arXiv cs.CV / 4/2/2026


Key Points

  • The paper addresses a central limitation of MLLMs for long-form video QA, namely limited context length and high compute cost, by focusing on efficient keyframe sampling.
  • It proposes an evidence-driven sampling objective using information bottleneck theory, maximizing conditional mutual information between selected frames and the user query to better capture evidential clues.
  • The method makes subset selection tractable by decomposing the optimization into independent frame-level scoring, avoiding inefficient combinatorial search.
  • A query-conditioned evidence scoring network is introduced and trained with a contrastive objective to estimate each frame’s evidential importance efficiently.
  • Experiments on long-form video understanding benchmarks show consistent improvements over prior sampling strategies under strict token budgets and better training efficiency.
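
The decomposition in the third point can be sketched concretely: once each frame receives an independent query-conditioned score, subset selection collapses to a top-k over those scores instead of a combinatorial search. The sketch below is illustrative only, with cosine similarity standing in for the paper's learned evidence scorer; all names and dimensions are assumptions.

```python
import numpy as np

def select_keyframes(frame_feats, query_feat, k):
    """Score each frame independently against the query and keep the top-k.

    This mirrors the decomposed objective: instead of searching over
    frame subsets combinatorially (O(2^T)), each frame gets one scalar
    evidence score and selection reduces to a top-k, O(T log T) overall.
    """
    # Stand-in scoring: cosine similarity between L2-normalized
    # query and frame embeddings plays the role of the learned scorer.
    f = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    scores = f @ q                      # one score per frame
    keep = np.argsort(scores)[-k:]      # indices of the k highest scores
    return np.sort(keep), scores        # keep temporal order for the MLLM

rng = np.random.default_rng(0)
frames = rng.normal(size=(128, 64))     # 128 candidate frames, 64-dim features
query = rng.normal(size=64)
idx, s = select_keyframes(frames, query, k=8)
print(idx)  # 8 selected frame indices, sorted temporally
```

Returning the indices in temporal order matters in practice: the selected frames are fed to the MLLM as a sequence, and shuffling them would discard temporal cues.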

Abstract

Multimodal Large Language Models (MLLMs) have shown strong performance on video question answering, but their application to long-form videos is constrained by limited context length and computational cost, making keyframe sampling essential. Existing approaches typically rely on semantic relevance or reinforcement learning, which either fail to capture evidential clues or suffer from inefficient combinatorial optimization. In this work, we propose an evidence-driven keyframe sampling framework grounded in information bottleneck theory. We formulate keyframe selection as maximizing the conditional mutual information between selected frames and the query, providing a principled objective that reflects each frame's contribution to answering the question. To make this objective tractable, we exploit its structure to derive a decomposed optimization that reduces subset selection to independent frame-level scoring. We further introduce a query-conditioned evidence scoring network trained with a contrastive objective to estimate evidential importance efficiently. Experiments on long-form video understanding benchmarks show that our method consistently outperforms prior sampling strategies under strict token budgets, while significantly improving training efficiency.
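
The abstract names a query-conditioned evidence scoring network trained with a contrastive objective but does not spell out its form. A minimal sketch, assuming an InfoNCE-style loss in which annotated evidential frames should outscore the remaining frames under the same query (the network architecture, feature dimensions, and positive indices here are all hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidenceScorer(nn.Module):
    """Hypothetical query-conditioned scorer: maps each (frame, query)
    embedding pair to a scalar evidential-importance score."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, frame_feats, query_feat):
        # frame_feats: (T, dim), query_feat: (dim,)
        q = query_feat.expand_as(frame_feats)          # broadcast query to all frames
        return self.mlp(torch.cat([frame_feats, q], dim=-1)).squeeze(-1)

def contrastive_loss(scores, pos_idx, temperature=0.07):
    """InfoNCE-style objective: frames carrying evidence (positives)
    should receive higher scores than the other frames in the video."""
    log_p = F.log_softmax(scores / temperature, dim=0)
    return -log_p[pos_idx].mean()

dim = 32
scorer = EvidenceScorer(dim)
frames = torch.randn(100, dim)          # 100 candidate frame embeddings
query = torch.randn(dim)
pos = torch.tensor([3, 47])             # assumed evidence-frame annotations
loss = contrastive_loss(scorer(frames, query), pos)
loss.backward()                         # gradients flow into the scorer
```

Because the scorer sees one frame at a time at inference, training it this way keeps selection cheap: no reinforcement learning rollout or subset enumeration is needed, which is consistent with the training-efficiency claim above.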