Why and When Visual Token Pruning Fails? A Study on Relevant Visual Information Shift in MLLMs Decoding

arXiv cs.CV / 4/15/2026


Key Points

  • The paper finds that existing visual token pruning methods work well for simple visual understanding but fail to generalize to complex visual reasoning tasks in multimodal LLM decoding.
  • It attributes this failure primarily to a “Relevant Visual Information Shift (RVIS)” phenomenon that changes which visual tokens are relevant as decoding progresses.
  • The authors propose DSTP (Decoding-stage Shift-aware Token Pruning), a training-free add-on that adjusts token pruning to track the shifting reasoning needs during the decoding stage.
  • Experiments show DSTP substantially reduces performance degradation on complex reasoning benchmarks and can also improve results on visual understanding benchmarks.
  • The approach is reported to work across multiple state-of-the-art architectures with minimal computational overhead, indicating broad applicability.
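The core idea behind the key points above can be illustrated with a toy sketch. This is a hypothetical illustration, not the paper's actual DSTP algorithm: at each decoding step, the top-k visual tokens are re-selected using that step's attention scores, so the kept set can shift as the model's reasoning focus moves (the RVIS phenomenon). The attention values and the function `topk_visual_tokens` are invented for illustration.

```python
# Hypothetical sketch of decoding-stage shift-aware pruning (NOT the paper's
# actual DSTP method): re-select the top-k visual tokens at every decoding
# step from that step's attention scores, instead of pruning once up front.

def topk_visual_tokens(attn_scores, k):
    """Return indices of the k visual tokens with the highest attention."""
    ranked = sorted(range(len(attn_scores)),
                    key=lambda i: attn_scores[i], reverse=True)
    return ranked[:k]

# Toy per-step attention over 6 visual tokens: early steps attend to
# tokens 0-1, the last step shifts toward tokens 4-5 (illustrating RVIS).
steps = [
    [0.9, 0.8, 0.1, 0.1, 0.05, 0.05],  # decoding step 1
    [0.7, 0.6, 0.3, 0.2, 0.10, 0.10],  # decoding step 2
    [0.1, 0.1, 0.2, 0.3, 0.80, 0.90],  # decoding step 3
]

kept_per_step = [topk_visual_tokens(s, k=2) for s in steps]
print(kept_per_step)  # → [[0, 1], [0, 1], [5, 4]]
```

A one-shot pruner that fixed the kept set at step 1 would discard tokens 4 and 5 and lose the information the final step needs, which is the failure mode the key points attribute to static pruning on complex reasoning tasks.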

Abstract

Recently, visual token pruning has been studied to handle the vast number of visual tokens in Multimodal Large Language Models. However, we observe that while existing pruning methods perform reliably on simple visual understanding, they struggle to effectively generalize to complex visual reasoning tasks, a critical gap underexplored in previous studies. Through a systematic analysis, we identify Relevant Visual Information Shift (RVIS) during decoding as the primary failure driver. To address this, we propose Decoding-stage Shift-aware Token Pruning (DSTP), a training-free add-on framework that enables existing pruning methods to align visual tokens with shifting reasoning requirements during the decoding stage. Extensive experiments demonstrate that DSTP significantly mitigates performance degradation of pruning methods in complex reasoning tasks, while consistently yielding performance gains even across visual understanding benchmarks. Furthermore, DSTP demonstrates effectiveness across diverse state-of-the-art architectures, highlighting its generalizability and efficiency with minimal computational overhead.