From Inheritance to Saturation: Disentangling the Evolution of Visual Redundancy for Architecture-Aware MLLM Inference Acceleration

arXiv cs.CV / 4/21/2026


Key Points

  • The paper argues that high computational costs in high-resolution MLLM inference largely stem from an explosion of visual tokens and resulting visual redundancy.
  • It identifies a key limitation of existing acceleration methods (e.g., token pruning and layer sparsity): "backbone dependency," where methods that perform well on Vicuna- or Mistral-based models (such as LLaVA) degrade significantly when transferred to other architectures like Qwen.
  • Using truncated matrix entropy, the authors uncover a universal three-stage inference lifecycle and decouple visual redundancy into architecture-independent Intrinsic Visual Redundancy (IVR) and architecture-dependent Secondary Saturation Redundancy (SSR).
  • Based on this decomposition, they introduce HalfV, which first reduces IVR with a unified pruning strategy and then adaptively manages SSR according to how it appears in a given backbone.
  • Experiments show HalfV improves the efficiency–performance trade-off across multiple backbones, including strong results on Qwen2.5-VL (96.8% performance retained at a 4.1× FLOPs speedup), and the code is publicly available on GitHub.
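The paper's analysis rests on truncated matrix entropy, a spectral measure of how much effective information a set of visual token features carries. As an illustration only (the paper's exact definition may differ), one common formulation normalizes the top-k squared singular values of the token feature matrix into a distribution and takes its Shannon entropy; low entropy suggests the tokens span a small subspace, i.e., high redundancy:

```python
import numpy as np

def truncated_matrix_entropy(features: np.ndarray, k: int) -> float:
    """Hypothetical sketch: entropy of the top-k spectral energies of a
    (n_tokens x dim) feature matrix. Not the paper's exact formula.

    Low values indicate that the visual tokens concentrate in a few
    directions, a sign of redundancy.
    """
    # Singular values of the token feature matrix, largest first.
    s = np.linalg.svd(features, compute_uv=False)
    # Keep the top-k spectral energies and renormalize to a distribution.
    energies = s[:k] ** 2
    p = energies / energies.sum()
    # Shannon entropy of the truncated spectrum (small epsilon for log(0)).
    return float(-(p * np.log(p + 1e-12)).sum())
```

Under this toy definition, a rank-1 token matrix (all tokens on one line) yields entropy near zero, while tokens spread evenly over k orthogonal directions yield entropy near log(k).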

Abstract

High-resolution Multimodal Large Language Models (MLLMs) face prohibitive computational costs during inference due to the explosion of visual tokens. Existing acceleration strategies, such as token pruning or layer sparsity, suffer from severe "backbone dependency", performing well on Vicuna or Mistral architectures (e.g., LLaVA) but causing significant performance degradation when transferred to architectures like Qwen. To address this, we leverage truncated matrix entropy to uncover a universal three-stage inference lifecycle, decoupling visual redundancy into universal Intrinsic Visual Redundancy (IVR) and architecture-dependent Secondary Saturation Redundancy (SSR). Guided by this insight, we propose HalfV, a framework that first mitigates IVR via a unified pruning strategy and then adaptively handles SSR based on its specific manifestation. Experiments demonstrate that HalfV achieves superior efficiency–performance trade-offs across diverse backbones. Notably, on Qwen2.5-VL, it retains 96.8% performance at a 4.1× FLOPs speedup, significantly outperforming state-of-the-art baselines. Our code is available at https://github.com/civilizwa/HalfV.