PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance

arXiv cs.CV / 5/4/2026

Models & Research

Key Points

  • The paper attributes the inefficiency of recent Video LLMs to high redundancy in video content, which inflates the number of visual tokens and computational cost.
  • It proposes Prompt-guided Pooling LLaVA (PPLLaVA), which compresses visual tokens aggressively while preserving instruction-relevant semantics.
  • PPLLaVA includes a CLIP-based visual-prompt alignment module to focus on regions of interest, a prompt-guided pooling mechanism using convolution-style pooling, and a clip context extension module for long, complex visual dialogues.
  • Experiments show up to 18x token reduction and strong performance retention, with state-of-the-art results on multiple video understanding benchmarks (captioning, QA, and long-form reasoning).
  • The authors report a significant improvement in inference throughput and provide code on GitHub.
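The prompt-guided pooling idea above can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function name, the window size, and the softmax weighting scheme are all assumptions; the real model works on CLIP features inside a Video LLM.

```python
import numpy as np

def prompt_guided_pool(visual_tokens, prompt_emb, window=4):
    """Compress visual tokens by relevance-weighted window pooling.

    visual_tokens: (N, D) array of visual token embeddings
    prompt_emb:    (D,) pooled text-prompt embedding
    window:        tokens merged per output token (compression factor)
    """
    # Cosine similarity of each visual token to the prompt
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    p = prompt_emb / np.linalg.norm(prompt_emb)
    scores = v @ p  # (N,) relevance of each token to the instruction

    pooled = []
    # Convolution-style sliding window with stride == window size
    for start in range(0, len(visual_tokens), window):
        chunk = visual_tokens[start:start + window]
        w = np.exp(scores[start:start + window])
        w /= w.sum()  # softmax weights within the window
        pooled.append(w @ chunk)  # relevance-weighted average of the window
    return np.array(pooled)

# Example: 32 tokens of dim 16 compressed 4x down to 8 tokens
rng = np.random.default_rng(0)
tokens = rng.standard_normal((32, 16))
prompt = rng.standard_normal(16)
out = prompt_guided_pool(tokens, prompt, window=4)
print(out.shape)  # (8, 16)
```

Tokens that align with the prompt dominate each pooled output, so instruction-relevant content survives compression; an 18x reduction, as reported in the paper, would correspond to a larger effective window over the spatio-temporal token grid.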

Abstract

In the past year, video-based large language models (Video LLMs) have achieved impressive progress, particularly in their ability to process long videos through extremely extended context lengths. However, this comes at the cost of significantly increased computational overhead due to the massive number of visual tokens, making efficiency a major bottleneck. In this paper, we identify the root of this inefficiency as the high redundancy in video content. To address this, we propose a novel pooling strategy that enables aggressive token compression while retaining instruction-relevant visual semantics. Our model, Prompt-guided Pooling LLaVA (PPLLaVA), introduces three key components: a CLIP-based visual-prompt alignment module that identifies regions of interest based on user instructions, a prompt-guided pooling mechanism that adaptively compresses the visual sequence using convolution-style pooling, and a clip context extension module tailored for processing long and complex prompts in visual dialogues. With up to 18x token reduction, PPLLaVA maintains strong performance across tasks, achieving state-of-the-art results on diverse video understanding benchmarks, ranging from image-to-video tasks such as captioning and QA to long-form video reasoning, while significantly improving inference throughput. Code is available at https://github.com/farewellthree/PPLLaVA.