CLASP: Class-Adaptive Layer Fusion and Dual-Stage Pruning for Multimodal Large Language Models

arXiv cs.CV, April 15, 2026


Key Points

  • The paper introduces CLASP, a plug-and-play token reduction framework to cut the heavy compute cost of multimodal LLMs caused by redundant visual tokens.
  • CLASP performs class-adaptive, multi-layer visual feature fusion to build category-specific representations that are conditioned on prompts/instructions.
  • It uses dual-stage pruning, splitting the token budget between attention-salient pivot tokens (for relevance) and redundancy-aware completion tokens (for coverage).
  • Experiments on multiple benchmarks show CLASP improves performance over existing pruning approaches across varying pruning ratios and MLLM architectures.
  • The authors state that the code will be released publicly at the linked GitHub repository, enabling adoption and independent evaluation.
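The paper's code has not yet been released, so the exact fusion mechanism is unknown. As a rough, assumption-laden sketch of what prompt-conditioned multi-layer fusion could look like, the toy function below weights each ViT layer by how well its pooled features align with a prompt embedding; the function name, tensor shapes, and softmax weighting are all illustrative assumptions, not the paper's method:

```python
import numpy as np

def fuse_layers(layer_feats, prompt_emb, temperature=1.0):
    """Hypothetical prompt-conditioned fusion of multi-layer ViT features.

    layer_feats: (L, N, D) array -- features from L ViT layers,
                 N visual tokens, D channels.
    prompt_emb:  (D,) array -- pooled prompt/instruction embedding.
    Returns fused features of shape (N, D).
    """
    # Score each layer by alignment of its mean-pooled features with the prompt.
    layer_summaries = layer_feats.mean(axis=1)            # (L, D)
    scores = layer_summaries @ prompt_emb / temperature   # (L,)
    # Softmax over layers -> per-layer fusion weights.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted sum across layers -> one fused token sequence.
    return np.einsum("l,lnd->nd", weights, layer_feats)
```

Because the weights depend on the prompt embedding, different instructions would emphasize different ViT depths, which is the behavior the "class-adaptive" framing suggests.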

Abstract

Multimodal Large Language Models (MLLMs) suffer from substantial computational overhead due to the high redundancy in visual token sequences. Existing approaches typically address this issue using single-layer Vision Transformer (ViT) features and static pruning strategies. However, such fixed configurations are often brittle under diverse instructions. To overcome these limitations, we propose CLASP, a plug-and-play token reduction framework based on class-adaptive layer fusion and dual-stage pruning. Specifically, CLASP first constructs category-specific visual representations through multi-layer vision feature fusion. It then performs dual-stage pruning, allocating the token budget between attention-salient pivot tokens for relevance and redundancy-aware completion tokens for coverage. Through class-adaptive pruning, CLASP enables prompt-conditioned feature fusion and budget allocation, allowing aggressive yet robust visual token reduction. Extensive experiments demonstrate that CLASP consistently outperforms existing methods across a wide range of benchmarks, pruning ratios, and MLLM architectures. Code will be available at https://github.com/Yunkaidang/CLASP.
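To make the dual-stage budget split concrete, here is a minimal sketch under stated assumptions (the actual algorithm is unpublished): stage one keeps the tokens with the highest attention saliency as pivots, and stage two greedily adds the tokens least similar, by cosine similarity, to anything already kept. The function name, the `pivot_frac` split, and the redundancy measure are all hypothetical:

```python
import numpy as np

def dual_stage_prune(tokens, attn_scores, budget, pivot_frac=0.5):
    """Illustrative dual-stage token selection.

    tokens:      (N, D) visual token features.
    attn_scores: (N,)   attention saliency per token (e.g. CLS attention).
    budget:      total number of tokens to keep.
    pivot_frac:  assumed fraction of the budget spent on pivot tokens.
    Returns sorted indices of the kept tokens.
    """
    n_pivot = max(1, int(budget * pivot_frac))
    # Stage 1: most attention-salient tokens become pivots (relevance).
    pivot_idx = np.argsort(attn_scores)[::-1][:n_pivot]
    # Stage 2: greedily add tokens least similar to the kept set (coverage).
    norm = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    kept = list(pivot_idx)
    remaining = [i for i in range(len(tokens)) if i not in set(kept)]
    while len(kept) < budget and remaining:
        sim_to_kept = norm[remaining] @ norm[kept].T   # cosine similarities
        redundancy = sim_to_kept.max(axis=1)           # nearest-kept similarity
        kept.append(remaining.pop(int(np.argmin(redundancy))))
    return np.array(sorted(kept))
```

The split mirrors the abstract's relevance/coverage trade-off: pivots anchor the prompt-relevant content, while completion tokens fill in regions the pivots miss rather than duplicating them.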