ForestPrune: High-ratio Visual Token Compression for Video Multimodal Large Language Models via Spatial-Temporal Forest Modeling

arXiv cs.CV / 3/25/2026


Key Points

  • The paper introduces ForestPrune, a training-free visual token pruning method for video multimodal large language models (MLLMs) aimed at achieving higher token compression ratios than prior approaches.
  • ForestPrune builds spatial-temporal “token forests” across video frames using semantic, spatial, and temporal constraints, then derives globally optimal pruning decisions based on token-tree depth and node roles.
  • Experiments on LLaVA-Video and LLaVA-OneVision across multiple video benchmarks show strong accuracy retention despite aggressive token reduction, including results like keeping 95.8% average accuracy while pruning 90% of tokens for LLaVA-OneVision.
  • The method also reports efficiency gains over existing compression baselines, such as a +10.1% accuracy improvement on MLVU and an 81.4% reduction in pruning time versus FrameFusion for LLaVA-Video.

Abstract

Because it greatly reduces computation and memory overhead, token compression has become a research hotspot for MLLMs and has achieved remarkable progress on image-language tasks. For video, however, existing methods still fall short of high-ratio token compression. We attribute this shortcoming to insufficient modeling of temporally continuous video content, and propose a novel, training-free token pruning method for video MLLMs, termed ForestPrune, which achieves effective, high-ratio pruning via spatial-temporal forest modeling. In practice, ForestPrune constructs token forests across video frames based on semantic, spatial, and temporal constraints, yielding a holistic comprehension of the video. It then evaluates the importance of token trees and nodes based on tree depth and node roles, thereby obtaining a globally optimal pruning decision. To validate ForestPrune, we apply it to two representative video MLLMs, LLaVA-Video and LLaVA-OneVision, and conduct extensive experiments on a suite of video benchmarks. The results demonstrate not only its effectiveness for video MLLMs, e.g., retaining 95.8% average accuracy while pruning 90% of tokens for LLaVA-OneVision, but also its superior performance and efficiency over compared token compression methods, e.g., +10.1% accuracy on MLVU and 81.4% less pruning time than FrameFusion on LLaVA-Video.
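To make the two-stage idea concrete, here is a minimal, illustrative sketch of the pipeline the abstract describes: tokens in later frames attach as children of sufficiently similar, spatially nearby tokens in a recent earlier frame (forming forests), and pruning then favors tree roots over deep descendants. All function names, thresholds, and scoring details below are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def build_forests(tokens, frame_ids, positions, sim_thresh=0.8, window=1):
    """Greedy sketch of spatial-temporal forest construction.

    A token attaches as a child of the most similar token in a nearby
    earlier frame (temporal constraint) at a close spatial position
    (spatial constraint) when cosine similarity exceeds sim_thresh
    (semantic constraint); otherwise it starts a new tree as a root.
    Positions are 1-D grid indices here purely for simplicity.
    """
    n = len(tokens)
    parent = np.full(n, -1)  # -1 marks a tree root
    norm = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    for i in range(n):
        best, best_sim = -1, sim_thresh
        for j in range(n):
            if frame_ids[j] >= frame_ids[i]:              # only earlier frames
                continue
            if frame_ids[i] - frame_ids[j] > window:      # temporal window
                continue
            if abs(positions[i] - positions[j]) > 1:      # spatial neighborhood
                continue
            sim = float(norm[i] @ norm[j])                # semantic similarity
            if sim > best_sim:
                best, best_sim = j, sim
        parent[i] = best
    return parent

def prune(parent, keep_ratio=0.1):
    """Score tokens by role and depth, keep the top keep_ratio share.

    Roots introduce new content and score highest; deeper nodes repeat
    earlier content and score lower (an illustrative scoring rule).
    """
    n = len(parent)
    depth = np.zeros(n, dtype=int)
    for i in range(n):
        j = i
        while parent[j] != -1:
            depth[i] += 1
            j = parent[j]
    score = np.where(parent == -1, 1.0, 1.0 / (1.0 + depth))
    k = max(1, int(n * keep_ratio))
    return np.sort(np.argsort(-score, kind="stable")[:k])
```

With two frames where frame 1 duplicates frame 0, the duplicates attach under the frame-0 roots, and pruning at a 50% ratio keeps only the roots, which is the redundancy-removal behavior the method targets.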