
Language-Guided Token Compression with Reinforcement Learning in Large Vision-Language Models

arXiv cs.CV / 3/17/2026


Key Points

  • TPRL is a reinforcement learning framework that learns adaptive visual token pruning trajectories in large vision-language models via language-guided sequential optimization tied to end-task performance.
  • The approach uses a self-supervised autoencoder to compress visual tokens into a compact state representation for efficient policy learning.
  • The pruning policy is initialized from demonstrations and fine-tuned with Proximal Policy Optimization to jointly optimize task accuracy and computational efficiency.
  • Experiments show TPRL can remove up to 66.7% of visual tokens and reduce FLOPs by up to 54.2% with only about 0.7% average accuracy loss.
  • Code for the method is released on GitHub, enabling use and replication by practitioners.
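The multi-step pruning idea from the key points can be sketched as a loop in which a policy repeatedly chooses how many tokens to keep, conditioned on a compact state summary. The sketch below is illustrative only: the `prune_tokens_sequentially` function, the dict-based token representation, and the stand-in importance score are assumptions, not code from the TPRL release.

```python
def prune_tokens_sequentially(tokens, policy, num_steps=3):
    """Multi-step visual token pruning as a sequential decision process.

    At each step the policy observes a compact state (here just the step
    index and token count, standing in for TPRL's autoencoder state) and
    emits an action: the fraction of tokens to keep.
    """
    kept = list(tokens)
    for step in range(num_steps):
        state = (step, len(kept))        # stand-in for the learned state representation
        keep_ratio = policy(state)       # action chosen by the pruning policy
        k = max(1, int(len(kept) * keep_ratio))
        # Retain the k most important tokens (score is a stand-in for
        # whatever importance signal the policy's value model provides).
        kept = sorted(kept, key=lambda t: t["score"], reverse=True)[:k]
    return kept


# Example: a fixed policy keeping 70% of tokens at each of 3 steps
# reduces 100 tokens to 34, i.e. roughly the ~66.7% removal reported.
tokens = [{"id": i, "score": float(i)} for i in range(100)]
remaining = prune_tokens_sequentially(tokens, policy=lambda state: 0.7)
```

In the actual method this fixed ratio would be replaced by a learned policy, initialized from demonstrations and fine-tuned with PPO.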

Abstract

Large Vision-Language Models (LVLMs) incur substantial inference costs due to the processing of a vast number of visual tokens. Existing methods typically struggle to model progressive visual token reduction as a multi-step decision process with sequential dependencies and often rely on hand-engineered scoring rules that lack adaptive optimization for complex reasoning trajectories. To overcome these limitations, we propose TPRL, a reinforcement learning framework that learns adaptive pruning trajectories through language-guided sequential optimization tied directly to end-task performance. We formulate visual token pruning as a sequential decision process with explicit state transitions and employ a self-supervised autoencoder to compress visual tokens into a compact state representation for efficient policy learning. The pruning policy is initialized through learning from demonstrations and subsequently fine-tuned using Proximal Policy Optimization (PPO) to jointly optimize task accuracy and computational efficiency. Our experimental results demonstrate that TPRL removes up to 66.7% of visual tokens and achieves up to a 54.2% reduction in FLOPs during inference while incurring an average accuracy drop of only 0.7%. Code is released at https://github.com/MagicVicCoder/TPRL.
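The PPO fine-tuning stage jointly optimizes task accuracy and computational efficiency, which implies a reward that trades the two off. A minimal sketch of such a reward is shown below; the function name, the linear form, and the weight `lam` are illustrative assumptions, not the published TPRL reward.

```python
def pruning_reward(task_correct: bool, flops_used: float,
                   flops_full: float, lam: float = 0.5) -> float:
    """Reward balancing end-task accuracy against compute saved.

    accuracy_term rewards a correct answer; efficiency_term rewards the
    fraction of FLOPs removed relative to the unpruned model. `lam` is a
    hypothetical trade-off weight, not a value from the paper.
    """
    accuracy_term = 1.0 if task_correct else 0.0
    efficiency_term = 1.0 - flops_used / flops_full  # fraction of FLOPs saved
    return accuracy_term + lam * efficiency_term
```

Under this form, a correct answer at the paper's reported 54.2% FLOPs reduction would score higher than a correct answer with no pruning, giving the policy an incentive to prune as aggressively as accuracy allows.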