Reducing Peak Memory Usage for Modern Multimodal Large Language Model Pipelines

arXiv cs.CV / 4/21/2026


Key Points

  • Multimodal large language models store many vision tokens in the key-value (KV) cache during inference, which makes memory consumption a major bottleneck as models scale.
  • Prior KV-cache compression approaches often run only after all inputs are processed, leaving the prefill stage with very high peak memory usage.
  • The paper argues that MLLMs have structural regularities and representational redundancy that can be leveraged to limit memory growth throughout inference.
  • It proposes a sequential, structure-aware input-compression mechanism that compresses the KV cache during the prefill stage to enforce a fixed memory budget.
  • Experiments indicate substantial peak memory reduction with only minimal degradation in generative performance, improving the practicality of multimodal inference.

Abstract

Multimodal large language models (MLLMs) have recently demonstrated strong capabilities in understanding and generating responses from diverse visual inputs, including high-resolution images and long video sequences. As these models scale to richer visual representations, inference increasingly relies on storing large numbers of vision tokens in the key-value (KV) cache, making memory consumption a central bottleneck. Existing methods address this issue by identifying redundancy in vision tokens and compressing the cache, but such compression is typically applied only after all inputs are processed, resulting in high peak memory usage during the prefill stage. In this work, we show that MLLMs exhibit inherent structural regularities and representational redundancy that can be exploited to control memory growth throughout inference. Based on this insight, we propose a sequential input-compression mechanism that enforces a fixed memory budget by performing structure-aware key-value cache compression during the prefill process. This approach substantially reduces peak memory usage while maintaining generative performance with only minimal degradation, enabling more practical and memory-efficient multimodal inference.
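The core idea of the abstract — interleaving prefill with budget-constrained cache eviction, instead of compressing only after the full input is processed — can be illustrated with a toy sketch. This is not the paper's algorithm: the chunking, the scalar "keys", and the importance score (absolute key magnitude) are all hypothetical stand-ins for whatever structure-aware criterion the authors actually use.

```python
def chunked_prefill_with_budget(chunks, budget):
    """Process input token chunks sequentially, evicting the
    lowest-scoring KV entries after each chunk so the cache never
    holds more than `budget` entries between chunks.

    The importance score (abs of a scalar 'key') is a hypothetical
    placeholder, not the paper's structure-aware criterion.
    """
    cache = []  # list of (key, value) pairs; stands in for the KV cache
    peak = 0    # track peak cache size across the whole prefill
    for chunk in chunks:
        for tok in chunk:
            cache.append((tok, tok))  # toy key/value "projections"
        peak = max(peak, len(cache))
        if len(cache) > budget:
            # Keep only the `budget` highest-scoring entries,
            # preserving original token order.
            ranked = sorted(range(len(cache)),
                            key=lambda i: abs(cache[i][0]),
                            reverse=True)[:budget]
            cache = [cache[i] for i in sorted(ranked)]
    return cache, peak

# With a budget of 100 over 8 chunks of 50 tokens, peak cache size is
# bounded at budget + chunk size (150) instead of the full 400 tokens
# a compress-after-prefill scheme would briefly hold.
import random
random.seed(0)
chunks = [[random.gauss(0, 1) for _ in range(50)] for _ in range(8)]
cache, peak = chunked_prefill_with_budget(chunks, budget=100)
print(len(cache), peak)  # 100 150
```

The contrast the paper draws is visible in the peak counter: compressing only after all inputs are processed would put peak memory at the full input length, while per-chunk eviction caps it near the budget.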