Mostly Text, Smart Visuals: Asymmetric Text-Visual Pruning for Large Vision-Language Models
arXiv cs.CL / 3/18/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper investigates how to prune Large Vision-Language Models by decoupling visual and textual weights to account for modality-specific behavior.
- It finds that the textual pathway is more sensitive to pruning and should be calibrated with text tokens, while the visual pathway is highly redundant and tolerates up to 50% weight sparsity.
- It introduces ATV-Pruning, which builds its calibration pool from all textual tokens plus a layer-adaptively selected subset of important visual tokens (a hedged sketch of this asymmetric scheme follows the list).
- Extensive experiments on standard multimodal benchmarks demonstrate that ATV-Pruning outperforms state-of-the-art pruning methods.
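
To make the asymmetric idea concrete, the following is a minimal sketch under stated assumptions, not the paper's released method: it scores weights Wanda-style (|W| times the calibration activation norm), which may differ from ATV-Pruning's actual criterion, and it stands in for the layer-adaptive visual-token selection with a simple top-k over activation norms. All names here (`build_calibration_pool`, `prune_linear`, `visual_keep_ratio`) are hypothetical.

```python
import torch

def build_calibration_pool(text_tokens: torch.Tensor,
                           visual_tokens: torch.Tensor,
                           visual_keep_ratio: float = 0.25) -> torch.Tensor:
    """Pool ALL text-token activations with a SUBSET of visual-token activations.

    text_tokens:   (n_text, d) hidden states of textual tokens
    visual_tokens: (n_vis, d)  hidden states of visual tokens
    The visual subset is picked by activation L2 norm here, a stand-in
    (assumption) for the paper's layer-adaptive importance selection.
    """
    n_keep = max(1, int(visual_tokens.shape[0] * visual_keep_ratio))
    vis_norms = visual_tokens.norm(dim=1)            # per-token magnitude
    top_idx = vis_norms.topk(n_keep).indices         # most salient visual tokens
    return torch.cat([text_tokens, visual_tokens[top_idx]], dim=0)

def prune_linear(weight: torch.Tensor,
                 calib_acts: torch.Tensor,
                 sparsity: float) -> torch.Tensor:
    """Zero the lowest-importance weights of one linear layer, per output row.

    Importance = |W_ij| * ||X_j||_2 over the calibration pool (Wanda-style
    score, an assumption about the scoring rule).
    weight:     (d_out, d_in)
    calib_acts: (n_tokens, d_in)
    """
    act_norm = calib_acts.norm(dim=0)                # (d_in,) column norms
    importance = weight.abs() * act_norm             # broadcasts over rows
    n_prune = int(weight.shape[1] * sparsity)
    if n_prune == 0:
        return weight
    # Indices of the n_prune least important inputs for each output row.
    prune_idx = importance.topk(n_prune, dim=1, largest=False).indices
    pruned = weight.clone()
    pruned.scatter_(1, prune_idx, 0.0)
    return pruned

# Asymmetric usage: a mild ratio for the text-sensitive pathway and an
# aggressive one (~50%, per the key points) for the redundant visual pathway.
torch.manual_seed(0)
d = 64
text_acts, vis_acts = torch.randn(128, d), torch.randn(576, d)
pool = build_calibration_pool(text_acts, vis_acts)

w_text = prune_linear(torch.randn(d, d), pool, sparsity=0.2)  # gentle
w_vis  = prune_linear(torch.randn(d, d), pool, sparsity=0.5)  # aggressive
print(f"text pathway zero fraction:   {(w_text == 0).float().mean():.2f}")
print(f"visual pathway zero fraction: {(w_vis == 0).float().mean():.2f}")
```

In this reading, the asymmetry lives in two knobs: which tokens enter the calibration pool (all text, only salient visual) and how much sparsity each pathway is asked to absorb. Any layer-adaptive refinement would vary `visual_keep_ratio` per layer rather than fixing it globally as this sketch does.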