Efficient Inference of Large Vision Language Models

arXiv cs.LG / 3/31/2026


Key Points

  • The paper explains that deploying Large Vision Language Models (LVLMs) is bottlenecked by high compute costs, especially the quadratic attention cost driven by the large number of visual tokens from high-resolution inputs.
  • It provides a survey-style taxonomy of state-of-the-art LVLM inference acceleration methods, organizing them into four dimensions: visual token compression, memory management and serving, efficient model architecture, and advanced decoding strategies.
  • The authors critically assess the limitations and trade-offs of existing optimization approaches, rather than presenting them as universally applicable.
  • The work highlights open research problems intended to guide future efforts in building more efficient multimodal systems for real-world deployment.
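The quadratic-attention point above can be made concrete with a back-of-the-envelope estimate. The sketch below is illustrative and not from the paper: the token counts, hidden size, and the FLOPs formula (per-layer self-attention cost growing roughly as n² · d for sequence length n and hidden dimension d) are assumptions chosen to show why shrinking the visual token count dominates the savings.

```python
# Illustrative sketch (not from the paper): why visual-token compression pays off.
# Per-layer self-attention FLOPs scale roughly as n^2 * d for sequence length n
# and hidden size d, so the visual token count dominates inference cost.

def attention_flops(num_tokens: int, hidden_dim: int) -> int:
    """Rough per-layer FLOPs for the QK^T and attention-times-V matmuls."""
    return 2 * (num_tokens ** 2) * hidden_dim

# Hypothetical numbers: a high-resolution image tiled into patches can yield
# thousands of visual tokens alongside a comparatively short text prompt.
hidden = 4096
text_tokens = 64
visual_full = 2304    # assumed token count for a high-res input
visual_pruned = 576   # assumed count after 4x visual token compression

full = attention_flops(text_tokens + visual_full, hidden)
pruned = attention_flops(text_tokens + visual_pruned, hidden)
print(f"attention speedup from token compression: {full / pruned:.1f}x")
```

Because the cost is quadratic in sequence length, a 4x reduction in visual tokens yields well over a 4x reduction in attention compute, which is why token compression is the first dimension in the survey's taxonomy.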

Abstract

Although Large Vision Language Models (LVLMs) have demonstrated impressive multimodal reasoning capabilities, their scalability and deployment are constrained by massive computational requirements. In particular, the large number of visual tokens produced from high-resolution input data aggravates the situation due to the quadratic complexity of attention mechanisms. To address these issues, the research community has developed several optimization frameworks. This paper presents a comprehensive survey of the current state-of-the-art techniques for accelerating LVLM inference. We introduce a systematic taxonomy that categorizes existing optimization frameworks into four primary dimensions: visual token compression, memory management and serving, efficient architectural design, and advanced decoding strategies. Furthermore, we critically examine the limitations of these current methodologies and identify critical open problems to inspire future research directions in efficient multimodal systems.