Collaborative Multi-Mode Pruning for Vision-Language Models

arXiv cs.CV / 4/6/2026


Key Points

  • The paper proposes Collaborative Multi-Mode Pruning (CoMP) to compress vision-language models more effectively on resource-constrained devices by jointly pruning both parameters and tokens rather than using a single pruning mode.
  • It introduces a Collaborative Importance Metric (CIM) that models the mutual interference between parameters and tokens, aiming to improve parameter importance estimation without harming token importance scoring when components are removed.
  • It develops a Multi-Mode Pruning Strategy (MPS) that breaks pruning into stages and adaptively shifts among pruning modes based on estimated pruning costs, historical cost, and random exploration to avoid unstable behavior and local optima.
  • Experiments across multiple vision-language tasks and models show CoMP maintains stronger performance under high pruning ratios compared with state-of-the-art single-mode approaches.
  • The authors provide an open-source implementation of CoMP via a public GitHub repository.
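To make the Collaborative Importance Metric idea above concrete, here is a minimal, hypothetical sketch of its two directions: weighting a parameter's importance by the significance of the tokens flowing through it, and discounting pruned parameters when scoring tokens. The function names, shapes, and scoring formulas are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def parameter_importance(weight, activations, token_scores):
    """Token-aware parameter importance (illustrative sketch, not the
    paper's exact CIM formula): scale each token's activation by its
    significance before computing a magnitude-times-activation score.

    weight:       (out, in) parameter matrix of one layer
    activations:  (tokens, in) input activations for that layer
    token_scores: (tokens,) per-token significance in [0, 1]
    """
    # Weight activations by token significance, then take per-feature norms.
    weighted = activations * token_scores[:, None]
    input_norm = np.linalg.norm(weighted, axis=0)      # (in,)
    # Score each parameter by |weight| times the norm of its input feature.
    return np.abs(weight) * input_norm[None, :]

def token_importance(attn, keep_mask):
    """Token importance that ignores pruned parameters (second direction
    of the sketch): attention from pruned heads no longer contributes.

    attn:      (heads, tokens) attention mass received by each token
    keep_mask: (heads,) 1.0 for kept heads, 0.0 for pruned heads
    """
    kept = attn * keep_mask[:, None]
    denom = max(keep_mask.sum(), 1.0)  # average over surviving heads only
    return kept.sum(axis=0) / denom                    # (tokens,)
```

The key design point, under these assumptions, is that the two scores consult each other: token scores enter the parameter score, and the keep mask from parameter pruning enters the token score, so neither metric is computed as if the other mode did not exist.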

Abstract

Vision-Language Models (VLMs) have advanced rapidly within the unified Transformer architecture, yet their deployment on resource-constrained devices remains challenging due to high computational complexity. While pruning has emerged as an effective technique for compressing VLMs, existing approaches predominantly focus on a single mode, pruning either parameters or tokens, and thus neglect to fully explore the inherent redundancy in each mode, which leads to substantial performance degradation at high pruning ratios. To address these limitations, we propose Collaborative Multi-Mode Pruning (CoMP), a novel framework tailored for VLMs that performs joint parameter and token pruning. Specifically, we first design a Collaborative Importance Metric (CIM) that accounts for the mutual interference between the coupled parameters and tokens. It incorporates the distinct significance of tokens into the computation of parameter importance scores, while simultaneously mitigating the effect of pruned parameters on token importance scores. Moreover, we develop a Multi-Mode Pruning Strategy (MPS) that decomposes the overall pruning process into a sequence of pruning stages; in each stage, it estimates the priority of each pruning mode based on its pruning cost and adaptively shifts to the optimal one. Additionally, MPS integrates historical cost and random exploration to stabilize the pruning process and avoid local optima. Extensive experiments across various vision-language tasks and models demonstrate that our method outperforms state-of-the-art approaches under high pruning ratios. The source code is available at https://github.com/Wuzimeng/CoMP.git.
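
The staged, cost-driven mode selection that the abstract attributes to MPS can be sketched roughly as an epsilon-greedy loop: blend each mode's current estimated cost with its historical cost, pick the cheapest mode, and occasionally explore at random. All function names, the blending rule, and the parameter values below are illustrative assumptions, not the paper's actual algorithm.

```python
import random

def select_mode(costs, history, alpha=0.5, epsilon=0.1, rng=random):
    """Pick a pruning mode for the next stage (hypothetical sketch).

    costs:   dict mapping mode name -> current estimated pruning cost
    history: dict of smoothed historical costs, updated in place
    alpha:   weight on the current estimate vs. history
    epsilon: probability of random exploration to escape local optima
    """
    # Blend current estimates with historical cost (exponential smoothing).
    blended = {m: alpha * costs[m] + (1 - alpha) * history.get(m, costs[m])
               for m in costs}
    history.update(blended)  # carry the smoothed costs to the next stage
    # With probability epsilon, explore a random mode.
    if rng.random() < epsilon:
        return rng.choice(sorted(blended))
    # Otherwise exploit: take the mode with the lowest blended cost.
    return min(blended, key=blended.get)

def multi_mode_prune(model, stages, estimate_cost, prune_step):
    """Decompose pruning into stages, each choosing its mode adaptively."""
    history = {}
    for _ in range(stages):
        costs = estimate_cost(model)   # e.g. {"param": 0.3, "token": 0.5}
        mode = select_mode(costs, history)
        model = prune_step(model, mode)  # one pruning step in the chosen mode
    return model
```

Under this reading, the historical term damps oscillation between modes across stages, while the epsilon term keeps the strategy from locking onto whichever mode happened to look cheapest early on.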