WSVD: Weighted Low-Rank Approximation for Fast and Efficient Execution of Low-Precision Vision-Language Models

arXiv cs.LG / 4/6/2026


Key Points

  • The paper introduces WSVD (Weighted Low-Rank Approximation), a finer-grained SVD computational pattern designed to reduce real execution latency for Vision-Language Models (VLMs), where prior SVD variants struggled to deliver substantial speedups in practice.
  • WSVD adaptively weights the relative importance of weight elements during the SVD process to better preserve accuracy while compressing low-rank representations.
  • The method further extends WSVD with quantization of both weights and activations, aiming to increase efficiency without degrading task quality.
  • Experiments report over 1.8× decoding speedup compared with other approaches while maintaining accuracy.
  • The authors open-source the implementation at https://github.com/SAI-Lab-NYU/WSVD to enable replication and adoption.
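The weighting idea in the second bullet can be illustrated with a small numpy sketch. This is not the paper's actual algorithm; it shows one common way to realize a weighted low-rank approximation, by folding per-column importance weights into the matrix before a truncated SVD and folding them back out afterwards, so that important columns are reconstructed more faithfully. The function name and the column-wise weighting scheme are illustrative assumptions.

```python
import numpy as np

def weighted_low_rank(W, col_importance, rank):
    """Rank-`rank` factorization of W that weights reconstruction
    error by per-column importance (illustrative sketch, not WSVD)."""
    # Scale columns by importance, so SVD truncation spends its
    # "budget" preserving the heavily weighted columns.
    U, s, Vt = np.linalg.svd(W * col_importance, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]        # absorb singular values into U
    V_r = Vt[:rank] / col_importance    # undo the column scaling
    return U_r, V_r                     # W is approximated by U_r @ V_r

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
imp = np.linspace(1.0, 4.0, 64)         # heavier weight on later columns
U_r, V_r = weighted_low_rank(W, imp, rank=16)
err = np.abs(W - U_r @ V_r)
# Columns with larger importance should incur smaller reconstruction error.
print(err[:, :8].mean() > err[:, -8:].mean())
```

The same mechanism generalizes to element-wise weights (as the paper's adaptive allocation suggests), though solving the fully element-weighted problem exactly requires iterative methods rather than a single SVD.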

Abstract

Singular Value Decomposition (SVD) has become an important technique for reducing the computational burden of Vision-Language Models (VLMs), which play a central role in tasks such as image captioning and visual question answering. Although multiple prior works have proposed efficient SVD variants to enable low-rank operations, we find that in practice it remains difficult to achieve substantial latency reduction during model execution. To address this limitation, we introduce a new computational pattern and apply SVD at a finer granularity, enabling real and measurable improvements in execution latency. Furthermore, recognizing that weight elements differ in their relative importance, we adaptively allocate relative importance to each element during the SVD process to better preserve accuracy, then extend this framework with quantization applied to both weights and activations, resulting in a highly efficient VLM. Collectively, we introduce *Weighted SVD* (WSVD), which outperforms other approaches by achieving over 1.8× decoding speedup while preserving accuracy. We open source our code at: https://github.com/SAI-Lab-NYU/WSVD
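The latency claim rests on a simple arithmetic fact: factoring a weight matrix W into two thin matrices U and V replaces one large matmul per token with two smaller ones, which pays off whenever the rank is below half the hidden dimension. A back-of-the-envelope FLOP count (dimensions chosen for illustration, not taken from the paper):

```python
# FLOPs for one decode-step projection y = W @ x versus the
# low-rank form y = U @ (V @ x), with W of shape (d, d),
# U of shape (d, r), V of shape (r, d).
d, r = 4096, 1024                       # illustrative sizes
full_flops = 2 * d * d                  # dense: one d-by-d matmul
lowrank_flops = 2 * d * r + 2 * r * d   # low-rank: two thin matmuls
print(full_flops / lowrank_flops)       # d / (2 * r) = 2.0 at r = d / 4
```

In practice the realized speedup depends on memory bandwidth and kernel efficiency, not just FLOPs, which is why the paper emphasizes a computational pattern that turns the theoretical reduction into measured decoding latency gains; the quantization of weights and activations compounds this by shrinking the memory traffic of the two factor matrices.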