Dynamic Token Compression for Efficient Video Understanding through Reinforcement Learning

arXiv cs.CV / 3/30/2026


Key Points

  • The paper introduces SCORE (Surprise-augmented token COmpression via REinforcement learning), a framework that learns an adaptive video-token compression policy for multimodal LLM video understanding rather than relying on fixed heuristic compression.
  • SCORE uses a lightweight policy network with a surprise-augmented state representation that incorporates inter-frame residuals to better capture temporal dynamics and motion saliency.
  • Training is done with group-wise reinforcement learning using a split-advantage estimator, plus a two-stage curriculum that transfers from static pseudo-videos to real dynamic videos for stability.
  • Experiments on multiple video understanding benchmarks show that SCORE outperforms existing compression baselines, delivering a 16x prefill speedup while retaining ~99.5% of the original performance at a 10% token retention ratio.
  • The work targets two key problems in long-form video understanding—high computational cost from redundant visual tokens and performance degradation from “context rot.”
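The core ideas in the bullets above (a surprise-augmented state built from inter-frame residuals, and a policy that keeps only a small fraction of tokens) can be sketched minimally as follows. This is an illustrative reading, not the paper's implementation: the function names, the plain-list feature representation, and the use of absolute residuals are all assumptions.

```python
# Hypothetical sketch: surprise-augmented states and top-k token retention.
# Per-frame features are modeled as plain lists of floats for clarity;
# the real system operates on visual-token embeddings inside an MLLM.

def surprise_states(frames):
    """Augment each frame's features with its residual from the previous
    frame (the "surprise" signal capturing temporal change)."""
    states = []
    prev = frames[0]  # first frame has zero residual by convention (assumed)
    for frame in frames:
        residual = [abs(a - b) for a, b in zip(frame, prev)]
        states.append(frame + residual)  # concatenate features and residual
        prev = frame
    return states

def retain_tokens(scores, ratio=0.10):
    """Return (sorted) indices of the top `ratio` fraction of tokens by
    policy score, mirroring a 10% retention budget."""
    k = max(1, int(len(scores) * ratio))
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return sorted(top)
```

Static frames would produce zero residuals everywhere, which is consistent with the paper's two-stage curriculum starting from static pseudo-videos before introducing real temporal dynamics.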

Abstract

Multimodal Large Language Models have demonstrated remarkable capabilities in video understanding, yet face prohibitive computational costs and performance degradation from "context rot" due to massive visual token redundancy. Existing compression strategies typically rely on heuristics or fixed transformations that are often decoupled from the downstream task objectives, limiting their adaptability and effectiveness. To address this, we propose SCORE (Surprise-augmented token COmpression via REinforcement learning), a unified framework that learns an adaptive token compression policy. SCORE introduces a lightweight policy network conditioned on a surprise-augmented state representation that incorporates inter-frame residuals to explicitly capture temporal dynamics and motion saliency. We optimize this policy using a group-wise reinforcement learning scheme with a split-advantage estimator, stabilized by a two-stage curriculum transferring from static pseudo-videos to real dynamic videos. Extensive experiments on diverse video understanding benchmarks demonstrate that SCORE significantly outperforms state-of-the-art baselines. Notably, SCORE achieves a 16x prefill speedup while preserving 99.5% of original performance at a 10% retention ratio, offering a scalable solution for efficient long-form video understanding.
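The "group-wise reinforcement learning scheme" in the abstract presumably builds on group-relative advantage estimation, where each sampled compression policy's reward is normalized against the other samples for the same video. The paper's specific split-advantage estimator is not detailed here, so the sketch below shows only the generic group-wise baseline; the function name and the population-std normalization are assumptions.

```python
from statistics import mean, pstdev

def group_advantages(rewards, eps=1e-8):
    """Generic group-wise advantage: standardize each rollout's reward
    against the mean and std of its sampling group (one group per video).
    SCORE's split-advantage estimator refines this; the split itself is
    not reproduced here."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]
```

Normalizing within a group removes the need for a learned value baseline, which fits the paper's goal of keeping the compression policy network lightweight.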