HieraVid: Hierarchical Token Pruning for Fast Video Large Language Models

arXiv cs.CV / 4/3/2026


Key Points

  • The paper introduces HieraVid, a hierarchical and dynamic token-pruning framework aimed at reducing the heavy compute cost of VideoLLMs caused by massive input token counts.
  • HieraVid exploits the inherent segment-frame structure of videos and the unidirectional propagation of multimodal information inside LLMs to prune at three levels: segment-level temporal segmentation and spatial merging, frame-level joint pruning within segments, and layer-level gradual redundancy reduction.
  • Experiments on four standard video understanding benchmarks show HieraVid can retain only 30% of tokens while achieving new state-of-the-art performance.
  • The approach preserves most of the baseline quality, maintaining over 98% and 99% of the performance of LLaVA-Video-7B and LLaVA-OneVision-7B, respectively, under this aggressive pruning.
  • Overall, the work suggests that exploiting the hierarchical structure of video inputs and internal model information flow can enable faster VideoLLM deployment without major accuracy loss.
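The three pruning levels listed above can be sketched as plain functions. This is a minimal illustrative sketch, not the paper's implementation: the cosine-similarity segmentation heuristic, the uniform-stride frame selection, and the linear layer-wise budget schedule are all assumptions chosen for clarity.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def segment_frames(frames, threshold=0.9):
    """Segment-level (assumed): start a new temporal segment whenever
    adjacent frame features diverge below a similarity threshold."""
    segments, current = [], [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if cosine(prev, cur) >= threshold:
            current.append(cur)
        else:
            segments.append(current)
            current = [cur]
    segments.append(current)
    return segments

def prune_within_segment(segment, keep_ratio=0.5):
    """Frame-level (assumed): keep a diverse subset of frames in a
    segment; a uniform stride stands in for the joint pruning rule."""
    keep = max(1, int(len(segment) * keep_ratio))
    stride = len(segment) / keep
    return [segment[int(i * stride)] for i in range(keep)]

def layer_schedule(n_tokens, n_layers, final_ratio=0.3):
    """Layer-level (assumed): a token budget that shrinks linearly with
    depth, ending at the reported 30% retention ratio."""
    return [max(1, round(n_tokens * (1 - (1 - final_ratio) * l / (n_layers - 1))))
            for l in range(n_layers)]

# Toy usage: four 2-d frame features split cleanly into two segments.
frames = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.0, 1.1]]
segs = segment_frames(frames)           # two segments of two frames each
budget = layer_schedule(100, 5)         # 100 tokens down to 30 across 5 layers
```

The point of the sketch is the ordering: coarse temporal structure is resolved first, per-segment frame redundancy second, and the remaining token budget is only then tapered inside the LLM, mirroring the progressive reduction the paper describes.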

Abstract

Video Large Language Models (VideoLLMs) have demonstrated impressive capabilities in video understanding, yet the massive number of input video tokens incurs a significant computational burden for deployment. Existing methods mainly prune video tokens at the input level while neglecting the inherent information structure embedded in videos and large language models (LLMs). To address this, we propose HieraVid, a hierarchical pruning framework that progressively and dynamically reduces visual redundancy. Based on two observations — that videos possess a segment-frame structure and that LLMs internally propagate multimodal information unidirectionally — we decompose pruning into three levels: 1) segment-level, where video tokens are first temporally segmented and spatially merged; 2) frame-level, where similar frames within the same segment are jointly pruned to preserve diversity; 3) layer-level, where redundancy is gradually reduced as the LLM layers deepen, without compromising performance. We conduct extensive experiments on four widely used video understanding benchmarks to comprehensively evaluate the effectiveness of HieraVid. Remarkably, with only 30% of tokens retained, HieraVid achieves new state-of-the-art performance, while maintaining over 98% and 99% of the performance of LLaVA-Video-7B and LLaVA-OneVision-7B, respectively.
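The segment-level "spatially merged" step in the abstract can be illustrated with a small sketch: highly similar tokens are greedily grouped and averaged into a single surviving token. The greedy grouping and the averaging rule are illustrative assumptions, not the paper's actual merging algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two token feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def merge_similar_tokens(tokens, threshold=0.95):
    """Assumed spatial-merging sketch: greedily group tokens whose
    similarity exceeds a threshold and replace each group by its mean."""
    merged, used = [], set()
    for i, t in enumerate(tokens):
        if i in used:
            continue
        group = [t]
        for j in range(i + 1, len(tokens)):
            if j not in used and cosine(t, tokens[j]) >= threshold:
                group.append(tokens[j])
                used.add(j)
        # average the group into one surviving token (dimension-wise mean)
        merged.append([sum(vals) / len(group) for vals in zip(*group)])
    return merged

# Toy usage: two identical tokens collapse into one; the third survives.
tokens = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
out = merge_similar_tokens(tokens)  # 3 tokens reduced to 2
```

A threshold-based merge like this trades a small amount of spatial detail for a large token-count reduction, which is the redundancy the abstract targets before any frame- or layer-level pruning runs.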