Structural Pruning of Large Vision Language Models: A Comprehensive Study on Pruning Dynamics, Recovery, and Data Efficiency

arXiv cs.CL / April 28, 2026


Key Points

  • The study investigates compressing existing large vision-language models (LVLMs) via structured pruning on the language-model backbone, followed by lightweight recovery training.
  • It compares layerwise vs. widthwise pruning and finds widthwise pruning generally preserves performance better in low-resource settings with limited compute or finetuning data.
  • Recovery training is analyzed under data scarcity, showing that effective recovery is possible with only 5% of the original data while retaining over 95% of baseline performance.
  • For small compression levels, finetuning only the multimodal projector is sufficient, and combining supervised finetuning with hidden-state distillation produces the best recovery across pruning strengths.
  • Experiments across three representative LVLM families (3B–7B parameters) provide practical guidance for deploying LVLMs on edge devices without extensive computation or abundant data.
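The two pruning paradigms compared above can be illustrated on a toy backbone. The sketch below is a hypothetical simplification, not the paper's implementation: layerwise pruning drops entire transformer layers, while widthwise pruning shrinks each layer's hidden dimension by keeping only the highest-magnitude channels (real methods use learned importance scores rather than these placeholder criteria).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "language-model backbone": a stack of layer weight matrices.
layers = [rng.standard_normal((8, 8)) for _ in range(6)]

def prune_layerwise(layers, keep_ratio):
    """Layerwise pruning: remove whole layers. Keeping the first k is a
    placeholder criterion; real methods score per-layer importance."""
    k = max(1, int(round(len(layers) * keep_ratio)))
    return layers[:k]

def prune_widthwise(layers, keep_ratio):
    """Widthwise pruning: in every layer, keep the output channels (rows)
    with the largest L2 norm, shrinking the hidden dimension."""
    pruned = []
    for w in layers:
        k = max(1, int(round(w.shape[0] * keep_ratio)))
        keep = np.argsort(-np.linalg.norm(w, axis=1))[:k]
        pruned.append(w[np.sort(keep)])
    return pruned

deep = prune_layerwise(layers, keep_ratio=0.5)   # fewer layers, same width
wide = prune_widthwise(layers, keep_ratio=0.5)   # same depth, narrower layers
print(len(deep), deep[0].shape)   # 3 (8, 8)
print(len(wide), wide[0].shape)   # 6 (4, 8)
```

Both variants halve the parameter count here, but along different axes, which is why their behavior under recovery training can differ.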

Abstract

While Large Vision Language Models (LVLMs) demonstrate impressive capabilities, their substantial computational and memory requirements pose deployment challenges on resource-constrained edge devices. Current parameter reduction techniques primarily involve training LVLMs from small language models, but these methods offer limited flexibility and remain computationally intensive. We study a complementary route: compressing existing LVLMs by applying structured pruning to the language model backbone, followed by lightweight recovery training. Specifically, we investigate two structural pruning paradigms, layerwise and widthwise pruning, and pair them with supervised finetuning and knowledge distillation on logits and hidden states. Additionally, we assess the feasibility of conducting recovery training with only a small fraction of the available data. Our results show that widthwise pruning generally maintains better performance in low-resource scenarios, where compute is limited or finetuning data is scarce. For recovery training, finetuning only the multimodal projector is sufficient at small compression levels, and a combination of supervised finetuning and hidden-state distillation yields the best recovery across pruning levels. Notably, effective recovery can be achieved using just 5% of the original data while retaining over 95% of the original performance. Through an empirical study on three representative LVLM families ranging from 3B to 7B parameters, this work offers actionable insights for practitioners compressing LVLMs without extensive compute or abundant data. The code base is available at https://github.com/YiranHuangIrene/VLMCompression.git.
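The recovery objective the abstract describes, supervised finetuning combined with hidden-state distillation, can be sketched as a weighted sum of a cross-entropy term and an MSE term between the pruned student and the original teacher. This is a minimal NumPy illustration under stated assumptions: the up-projection `proj` (needed because widthwise pruning narrows the student's hidden size) and the weighting `alpha` are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def recovery_loss(student_logits, student_hidden, teacher_hidden,
                  labels, proj, alpha=0.5):
    """Hypothetical combined objective: SFT cross-entropy plus an MSE
    hidden-state distillation term. `proj` is an assumed learnable map
    from the pruned student width back to the teacher width."""
    # Supervised finetuning term: token-level cross-entropy.
    z = student_logits - student_logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Distillation term: match the teacher's hidden activations.
    mse = ((student_hidden @ proj - teacher_hidden) ** 2).mean()
    return ce + alpha * mse

T, V, d_s, d_t = 4, 10, 8, 16      # tokens, vocab, student/teacher widths
loss = recovery_loss(
    student_logits=rng.standard_normal((T, V)),
    student_hidden=rng.standard_normal((T, d_s)),
    teacher_hidden=rng.standard_normal((T, d_t)),
    labels=rng.integers(0, V, size=T),
    proj=rng.standard_normal((d_s, d_t)) / np.sqrt(d_s),
)
```

In the paper's setup this loss would be minimized over the pruned backbone (or, at small compression levels, over the multimodal projector alone) on the small recovery dataset.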