HAWK: Head Importance-Aware Visual Token Pruning in Multimodal Models

arXiv cs.CV / 4/10/2026


Key Points

  • The paper introduces HAWK, a training-free visual token pruning method for multimodal LLMs that targets the inference latency and compute overhead caused by large numbers of visual tokens.
  • It argues that attention heads contribute unevenly to visual understanding, using head importance weights and text-guided attention to estimate which visual tokens are most task-relevant.
  • HAWK retains crucial visual information while removing redundant tokens, and is designed to work seamlessly across different MLLMs without retraining.
  • Experiments on multiple vision-language benchmarks report state-of-the-art accuracy, including results on Qwen2.5-VL where it preserves 96.0% accuracy while pruning 80.2% of visual tokens.
  • The approach also reduces end-to-end latency (to 74.4% of the original) and lowers GPU memory usage, with code released on GitHub.

Abstract

In multimodal large language models (MLLMs), the surge of visual tokens significantly increases the inference time and computational overhead, making them impractical for real-time or resource-constrained applications. Visual token pruning is a promising strategy for reducing the cost of MLLM inference by removing redundant visual tokens. Existing research usually assumes that all attention heads contribute equally to the visual interpretation. However, our study reveals that different heads may capture distinct visual semantics and inherently play distinct roles in visual processing. In light of this observation, we propose HAWK, a head importance-aware visual token pruning method that perceives the varying importance of attention heads in visual tasks to maximize the retention of crucial tokens. By leveraging head importance weights and text-guided attention to assess visual token significance, HAWK effectively retains task-relevant visual tokens while removing redundant ones. The proposed HAWK is entirely training-free and can be seamlessly applied to various MLLMs. Extensive experiments on multiple mainstream vision-language benchmarks demonstrate that HAWK achieves state-of-the-art accuracy. When applied to Qwen2.5-VL, HAWK retains 96.0% of the original accuracy after pruning 80.2% of the visual tokens. Additionally, it reduces end-to-end latency to 74.4% of the original and further decreases GPU memory usage across the tested models. The code is available at https://github.com/peppery77/HAWK.git.
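The core scoring idea in the abstract — weight each attention head by its estimated importance, aggregate text-to-visual attention under those weights, and keep the highest-scoring visual tokens — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the mean-over-text aggregation, and the fixed `head_weights` vector are all assumptions for demonstration.

```python
import numpy as np

def prune_visual_tokens(attn, head_weights, keep_ratio=0.2):
    """Score visual tokens by head-importance-weighted, text-guided attention
    and keep the top `keep_ratio` fraction.

    attn:         (heads, text_tokens, visual_tokens) attention from text
                  query tokens to visual tokens (illustrative layout).
    head_weights: (heads,) importance weight per attention head.
    Returns sorted indices of retained visual tokens.
    """
    # Aggregate each head's text-to-visual attention: (heads, visual_tokens)
    per_head = attn.mean(axis=1)
    # Weight heads by importance and sum over heads: (visual_tokens,)
    scores = (head_weights[:, None] * per_head).sum(axis=0)
    k = max(1, int(keep_ratio * scores.shape[0]))
    # Keep the k highest-scoring visual tokens, in original order
    keep = np.sort(np.argsort(scores)[-k:])
    return keep

# Toy example: 4 heads, 3 text tokens, 10 visual tokens
rng = np.random.default_rng(0)
attn = rng.random((4, 3, 10))
head_weights = np.array([0.5, 0.3, 0.1, 0.1])  # hypothetical importance weights
kept = prune_visual_tokens(attn, head_weights, keep_ratio=0.2)
print(len(kept))  # 2 of 10 visual tokens retained (~80% pruned)
```

In practice the paper derives head importance from the model itself rather than supplying it externally, and pruning is applied inside the MLLM's forward pass; the sketch only shows the top-k selection under weighted, text-guided attention scores.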
