Mostly Text, Smart Visuals: Asymmetric Text-Visual Pruning for Large Vision-Language Models
arXiv cs.CL / 3/18/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper investigates how to prune Large Vision-Language Models by decoupling visual and textual weights to account for modality-specific behavior.
- It finds that the textual pathway is more sensitive to pruning and should be calibrated with text tokens, while the visual pathway is highly redundant, allowing up to 50% sparsity.
- It introduces ATV-Pruning, which builds a calibration pool from all textual tokens plus a subset of visual tokens chosen by a layer-adaptive strategy that selects the most important visual tokens (a minimal sketch of this pipeline follows the list).
- Extensive experiments on standard multimodal benchmarks demonstrate that ATV-Pruning outperforms state-of-the-art pruning methods.
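To make the pipeline concrete, here is a minimal PyTorch sketch of the idea in the key points above. The summary does not specify the paper's importance metric or its exact layer-adaptive rule, so this sketch assumes a Wanda-style score (|W| times the per-input-channel activation norm over the calibration pool) and a simple top-k activation-norm rule for visual-token selection. `wanda_mask`, `select_visual_tokens`, `keep_frac`, and the 20% textual sparsity are illustrative assumptions; only the 50% visual sparsity comes from the key points.

```python
import torch

def wanda_mask(weight: torch.Tensor, acts: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Wanda-style mask: score = |W_ij| * ||X_j||_2, pruned per output row."""
    col_norm = acts.norm(dim=0)                       # (in,) L2 norm per input channel
    scores = weight.abs() * col_norm.unsqueeze(0)     # (out, in) importance scores
    n_prune = int(weight.shape[1] * sparsity)
    drop = torch.argsort(scores, dim=1)[:, :n_prune]  # lowest-scoring columns per row
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, drop, False)
    return mask

def select_visual_tokens(visual_acts: torch.Tensor, keep_frac: float) -> torch.Tensor:
    """Stand-in for the layer-adaptive selection: keep the highest-norm
    visual tokens (keep_frac could vary per layer in the real method)."""
    k = max(1, int(keep_frac * visual_acts.shape[0]))
    idx = torch.topk(visual_acts.norm(dim=-1), k).indices
    return visual_acts[idx]

# Toy shapes: 64 text tokens, 256 visual tokens, hidden size 128.
text_acts = torch.randn(64, 128)
visual_acts = torch.randn(256, 128)
W_text = torch.randn(128, 128)    # weight serving mainly the textual pathway
W_visual = torch.randn(128, 128)  # weight serving mainly the visual pathway

# Calibration pool: all text tokens + a selected subset of visual tokens.
pool = torch.cat([text_acts, select_visual_tokens(visual_acts, keep_frac=0.25)], dim=0)

# Asymmetric sparsity: prune the text-sensitive weight lightly (assumed 20%),
# the redundant visual-pathway weight at the 50% reported in the key points.
W_text_pruned = W_text * wanda_mask(W_text, pool, sparsity=0.2)
W_visual_pruned = W_visual * wanda_mask(W_visual, pool, sparsity=0.5)
```

Pruning within each output row keeps every neuron's fan-in uniformly sparse, which is why Wanda-style methods rank weights row-wise rather than over the whole matrix.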