
Mostly Text, Smart Visuals: Asymmetric Text-Visual Pruning for Large Vision-Language Models

arXiv cs.CL · March 18, 2026

Topics: Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper investigates how to prune Large Vision-Language Models by decoupling visual and textual weights to account for modality-specific behavior.
  • It finds that the textual pathway is more sensitive to pruning and should be calibrated with text tokens, while the visual pathway is highly redundant, allowing up to 50% sparsity.
  • It introduces ATV-Pruning, which builds a calibration pool from all textual tokens and a subset of visual tokens and applies a layer-adaptive strategy to select important visual tokens.
  • Extensive experiments on standard multimodal benchmarks demonstrate that ATV-Pruning outperforms state-of-the-art pruning methods.
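The layer-adaptive selection of visual tokens mentioned above can be illustrated with a small sketch. This is a hypothetical reconstruction, not the paper's released code: the function name `select_visual_tokens`, the norm-based ranking, and the linearly decaying keep-ratio schedule are all assumptions made for illustration.

```python
# Hypothetical sketch: layer-adaptive selection of visual tokens for the
# calibration pool. Tokens are ranked by activation L2 norm and a
# layer-dependent fraction is kept (an assumed schedule, not the paper's).
import numpy as np

def select_visual_tokens(visual_acts, layer_idx, num_layers,
                         min_keep=0.1, max_keep=0.5):
    """Keep the top-k visual tokens by L2 norm; k shrinks with depth."""
    # Assumed schedule: keep more tokens in early layers, fewer in deep ones.
    keep_ratio = max_keep - (max_keep - min_keep) * layer_idx / (num_layers - 1)
    k = max(1, int(visual_acts.shape[0] * keep_ratio))
    norms = np.linalg.norm(visual_acts, axis=1)   # one norm per token
    top = np.argsort(norms)[-k:]                  # indices of the k largest
    return visual_acts[top]

rng = np.random.default_rng(1)
acts = rng.normal(size=(256, 64))   # 256 visual tokens, hidden size 64
pool_early = select_visual_tokens(acts, layer_idx=0, num_layers=32)
pool_late = select_visual_tokens(acts, layer_idx=31, num_layers=32)
```

Under this schedule, an early layer contributes half of its visual tokens to the pool while the deepest layer contributes only a tenth, reflecting the general idea that different layers warrant different visual-token budgets.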

Abstract

Network pruning is an effective technique for producing lightweight Large Vision-Language Models (LVLMs); it typically incorporates both weights and activations into an importance metric. However, existing efforts process calibration data from different modalities in a unified manner, overlooking modality-specific behaviors. This raises a critical challenge: how to address the divergent behaviors of textual and visual tokens for accurate pruning of LVLMs. To this end, we systematically investigate the sensitivity of visual and textual tokens to pruning by decoupling their corresponding weights, revealing that (i) the textual pathway should be calibrated with text tokens, since it exhibits higher sensitivity than the visual pathway, and (ii) the visual pathway exhibits high redundancy, permitting sparsity of up to 50%. Motivated by these insights, we propose a simple yet effective Asymmetric Text-Visual Weight Pruning method for LVLMs, dubbed ATV-Pruning, which establishes an importance metric for accurate weight pruning by selecting informative tokens from both the textual and visual pathways. Specifically, ATV-Pruning integrates two primary innovations: first, a calibration pool is adaptively constructed from all textual tokens and a subset of visual tokens; second, a layer-adaptive selection strategy yields the important visual tokens. Finally, extensive experiments across standard multimodal benchmarks verify the superiority of ATV-Pruning over state-of-the-art methods.
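To make the asymmetric idea concrete, the sketch below applies a Wanda-style importance metric (|W| scaled by the per-feature activation norm, a common activation-aware pruning metric) separately to textual and visual calibration pools, with a larger sparsity budget on the visual pathway. All names and the specific budgets are illustrative assumptions; the paper's actual metric and schedule may differ.

```python
# Hypothetical sketch of asymmetric text-visual weight pruning, assuming
# a Wanda-style metric: importance[i, j] = |W[i, j]| * ||X[:, j]||_2,
# computed per calibration pool, with per-pathway sparsity budgets.
import numpy as np

def wanda_importance(weight: np.ndarray, activations: np.ndarray) -> np.ndarray:
    """|W| scaled by the L2 norm of each input feature over calibration tokens."""
    feat_norm = np.linalg.norm(activations, axis=0)  # (in_features,)
    return np.abs(weight) * feat_norm                # (out_features, in_features)

def prune_pathway(weight, calib_tokens, sparsity):
    """Zero out the lowest-importance weights in each output row."""
    imp = wanda_importance(weight, calib_tokens)
    k = int(weight.shape[1] * sparsity)              # weights removed per row
    if k == 0:
        return weight.copy()
    pruned = weight.copy()
    idx = np.argsort(imp, axis=1)[:, :k]             # k least important per row
    np.put_along_axis(pruned, idx, 0.0, axis=1)
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
text_tokens = rng.normal(size=(64, 16))     # textual calibration activations
visual_tokens = rng.normal(size=(256, 16))  # visual calibration activations

# Asymmetric budgets (assumed values): the textual pathway is pruned
# lightly because it is more sensitive, while the visual pathway
# tolerates roughly 50% sparsity per the paper's finding.
W_text = prune_pathway(W, text_tokens, sparsity=0.2)
W_vis = prune_pathway(W, visual_tokens, sparsity=0.5)
```

The key design choice mirrored here is that each pathway is calibrated with its own modality's tokens, rather than pooling all tokens into one undifferentiated calibration set.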