AI Navigate

Prune Redundancy, Preserve Essence: Vision Token Compression in VLMs via Synergistic Importance-Diversity

arXiv cs.CV / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • Vision-language models (VLMs) suffer from computational inefficiency because they generate many redundant visual tokens, which slows inference.
  • The proposed method, PruneSID, introduces a training-free two-stage approach combining Principal Semantic Components Analysis and Intra-group Non-Maximum Suppression to effectively prune redundant visual tokens while preserving semantic richness.
  • PruneSID dynamically adjusts compression ratios based on image complexity to preserve information across diverse scenes, achieving state-of-the-art accuracy with significant token reduction.
  • Experimental results show that PruneSID outperforms previous methods in both accuracy and speed, retaining as few as 5.6% of tokens with minimal performance loss and speeding up token prefilling by 7.8×.
  • The framework generalizes well across different VLM architectures and supports both image and video modalities, demonstrating broad applicability and efficiency gains in vision-language tasks.
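The two-stage pipeline in the key points can be sketched in code. This is an illustrative reconstruction, not the paper's implementation: the grouping step stands in for Principal Semantic Components Analysis by assigning each token to the principal component it aligns with most, and the `sim_thresh`, `n_groups`, and `keep_per_group` parameters are assumptions chosen for the example.

```python
import numpy as np

def prune_tokens(tokens, scores, n_groups=8, keep_per_group=2, sim_thresh=0.9):
    """Illustrative two-stage pruning: cluster tokens into semantic groups,
    then suppress near-duplicate tokens within each group (greedy NMS)."""
    # Stage 1 (stand-in for PSCA): project centered tokens onto the top
    # principal components and assign each token to its dominant component.
    centered = tokens - tokens.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_groups]                        # (n_groups, dim)
    group_ids = np.abs(centered @ components.T).argmax(axis=1)

    # Stage 2: intra-group non-maximum suppression, keeping the most
    # important token first and dropping its near-duplicates.
    norms = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    kept = []
    for g in range(n_groups):
        idx = np.where(group_ids == g)[0]
        idx = idx[np.argsort(-scores[idx])]           # highest importance first
        selected = []
        for i in idx:
            # Skip tokens too similar (cosine) to an already-selected one.
            if all(norms[i] @ norms[j] < sim_thresh for j in selected):
                selected.append(int(i))
            if len(selected) == keep_per_group:
                break
        kept.extend(selected)
    return sorted(kept)
```

Under this sketch, each semantic group contributes at most `keep_per_group` representatives, so concept coverage and diversity are enforced jointly rather than by importance ranking alone.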

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09480 (cs)
[Submitted on 10 Mar 2026]

Title:Prune Redundancy, Preserve Essence: Vision Token Compression in VLMs via Synergistic Importance-Diversity

By Zhengyao Fang and 5 other authors
Abstract: Vision-language models (VLMs) face significant computational inefficiencies caused by excessive generation of visual tokens. While prior work shows that a large fraction of visual tokens are redundant, existing compression methods struggle to balance importance preservation and information diversity. To address this, we propose PruneSID, a training-free Synergistic Importance-Diversity approach featuring a two-stage pipeline: (1) Principal Semantic Components Analysis (PSCA) for clustering tokens into semantically coherent groups, ensuring comprehensive concept coverage, and (2) Intra-group Non-Maximum Suppression (NMS) for pruning redundant tokens while preserving key representative tokens within each group. Additionally, PruneSID incorporates an information-aware dynamic compression ratio mechanism that optimizes token compression rates based on image complexity, enabling more effective average information preservation across diverse scenes. Extensive experiments demonstrate state-of-the-art performance, achieving 96.3% accuracy on LLaVA-1.5 with only 11.1% token retention, and 92.8% accuracy at extreme compression rates (5.6%) on LLaVA-NeXT, outperforming prior methods by 2.5% with 7.8× faster prefilling speed compared to the original model. Our framework generalizes across diverse VLMs and both image and video modalities, showcasing strong cross-modal versatility. Code is available at this https URL.
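The information-aware dynamic compression ratio described in the abstract can be illustrated with a minimal sketch. The entropy-based complexity measure, the `r_min`/`r_max` bounds, and the linear mapping below are all assumptions for illustration; the paper's actual mechanism may differ.

```python
import numpy as np

def dynamic_keep_ratio(scores, r_min=0.056, r_max=0.25):
    """Map the entropy of per-token importance scores to a retention ratio.

    A flat score distribution (high entropy) suggests a complex image where
    information is spread across many tokens, so more tokens are kept; a
    peaked distribution suggests a simple image, so fewer are kept.
    """
    p = scores / scores.sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    max_entropy = np.log(len(scores))       # entropy of a uniform distribution
    complexity = entropy / max_entropy      # normalized to [0, 1]
    return r_min + complexity * (r_max - r_min)
```

With this mapping, a near-uniform score distribution yields a ratio close to `r_max`, while a distribution dominated by a few tokens yields one close to `r_min`, matching the paper's reported extreme-compression setting of 5.6% retention.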
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09480 [cs.CV]
  (or arXiv:2603.09480v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09480

Submission history

From: Zhengyao Fang [view email]
[v1] Tue, 10 Mar 2026 10:31:58 UTC (3,421 KB)