HybridKV: Hybrid KV Cache Compression for Efficient Multimodal Large Language Model Inference

arXiv cs.AI / 4/8/2026


Key Points

  • The paper argues that multimodal LLM inference is slowed and memory-heavy because KV caches grow rapidly with visual tokens and must stay resident on GPU memory during decoding.
  • It critiques existing KV compression methods that mainly focus on fixed budget allocation, noting they do not account for heterogeneous attention-head behaviors that benefit from different compression strategies.
  • HybridKV is proposed as a three-stage hybrid framework that first classifies attention heads as static vs dynamic, then allocates KV budgets hierarchically top-down, and finally applies text-prior pruning for static heads and chunk-wise retrieval for dynamic heads.
  • On 11 multimodal benchmarks using Qwen2.5-VL-7B, HybridKV cuts KV cache memory by up to 7.9× and speeds decoding by 1.52×, with minimal performance drop.
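The first stage above, splitting heads into static and dynamic types by text-centric attention, can be sketched roughly as follows. The exact criterion and the 0.6 threshold are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def classify_heads(attn, text_mask, text_ratio_threshold=0.6):
    """Classify attention heads as 'static' or 'dynamic'.

    attn: (num_heads, q_len, kv_len) attention weights for one layer.
    text_mask: (kv_len,) boolean, True where the KV position is a text token.
    A head whose attention mass concentrates on text tokens is treated as
    static; otherwise dynamic. NOTE: this exact rule and the threshold are
    illustrative guesses at what "text-centric attention" classification
    could look like, not the paper's actual procedure.
    """
    # Fraction of each head's attention mass landing on text tokens,
    # averaged over query positions -> shape (num_heads,)
    text_mass = attn[:, :, text_mask].sum(axis=-1).mean(axis=-1)
    return ["static" if m >= text_ratio_threshold else "dynamic" for m in text_mass]
```

For example, a head that puts 90% of its attention on text positions would be labeled static and later compressed by text-prior pruning, while a head spreading attention over visual positions would be labeled dynamic and handled by chunk-wise retrieval.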

Abstract

Multimodal Large Language Models (MLLMs) have advanced unified reasoning over text, images, and videos, but their inference is hindered by the rapid growth of key-value (KV) caches. Each visual input expands into thousands of tokens, causing caches to scale linearly with context length and remain resident in GPU memory throughout decoding, which leads to prohibitive memory overhead and latency even on high-end GPUs. A common solution is to compress caches under a fixed allocated budget at different granularities: token-level methods uniformly discard less important tokens, layer-level methods vary retention across layers, and head-level methods redistribute budgets across heads. Yet these approaches stop at allocation and overlook the heterogeneous behaviors of attention heads, which call for distinct compression strategies. We propose HybridKV, a hybrid KV cache compression framework that integrates complementary strategies in three stages: heads are first classified into static or dynamic types using text-centric attention; then a top-down budget allocation scheme hierarchically assigns KV budgets; finally, static heads are compressed by text-prior pruning and dynamic heads by chunk-wise retrieval. Experiments on 11 multimodal benchmarks with Qwen2.5-VL-7B show that HybridKV reduces KV cache memory by up to 7.9× and achieves 1.52× faster decoding, with performance nearly matching, and in some cases exceeding, the full-cache MLLM.
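As a rough illustration of the final stage, chunk-wise retrieval for a dynamic head might score fixed-size KV chunks against the current decoding query and keep only the highest-scoring chunks. The chunk size, mean-key dot-product scoring, and top-k selection below are assumptions for illustration, not the paper's specification:

```python
import numpy as np

def chunkwise_retrieve(keys, query, chunk_size=4, top_chunks=2):
    """Return KV positions to retain for one dynamic head.

    keys: (kv_len, head_dim) cached keys for the head.
    query: (head_dim,) current decoding query.
    Chunks are scored by the dot product between the query and each
    chunk's mean key; positions in the top-scoring chunks are kept.
    NOTE: the scoring rule and hyperparameters are illustrative.
    """
    kv_len = keys.shape[0]
    n_chunks = (kv_len + chunk_size - 1) // chunk_size
    scores = []
    for c in range(n_chunks):
        chunk = keys[c * chunk_size:(c + 1) * chunk_size]
        scores.append(float(query @ chunk.mean(axis=0)))
    # Keep the top_chunks best chunks, in original cache order.
    keep = sorted(np.argsort(scores)[::-1][:top_chunks])
    idx = []
    for c in keep:
        idx.extend(range(c * chunk_size, min((c + 1) * chunk_size, kv_len)))
    return idx
```

Retrieving whole chunks rather than individual tokens keeps the retained cache contiguous in blocks, which is generally friendlier to GPU memory access than scattered per-token selection.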