Low-Rank Compression of Pretrained Models via Randomized Subspace Iteration

arXiv cs.AI / 4/6/2026

Key Points

  • The paper tackles efficient compression of large pretrained models by using low-rank weight decomposition while avoiding the high cost of exact SVD.
  • It links the spectral error of low-rank approximation to downstream predictive performance by analyzing how deviations in softmax class probabilities are governed by the error in the compressed weights.
  • The authors argue randomized SVD (RSVD) can produce poor approximations when pretrained models have slowly decaying singular value spectra, which is common in practice.
  • They propose randomized subspace iteration (RSI) with multiple power iterations to improve spectral separation and achieve controllable approximation quality.
  • Experiments on convolutional networks and transformer architectures show RSI delivers near-optimal approximation quality and better predictive accuracy than RSVD under aggressive compression settings.
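The RSI scheme the key points describe (a randomized range sketch refined by power iterations, followed by an exact SVD of the small projected matrix) can be sketched as follows. This is a minimal illustration of standard randomized subspace iteration, not the authors' code; the function name, oversampling margin, and default iteration count are assumptions.

```python
import numpy as np

def rsi_low_rank(W, rank, n_iter=4, oversample=10, seed=0):
    """Rank-`rank` approximation of W via randomized subspace iteration.

    Power iterations (n_iter) sharpen spectral separation, which is what
    helps when the singular value spectrum decays slowly; n_iter=0 would
    reduce this to plain RSVD. QR re-orthonormalization after each
    multiply keeps the sketch numerically stable.
    """
    rng = np.random.default_rng(seed)
    m, n = W.shape
    # Gaussian test matrix with a small oversampling margin.
    Y = W @ rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(Y)
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(W.T @ Q)  # apply W^T, re-orthonormalize
        Q, _ = np.linalg.qr(W @ Q)    # apply W, re-orthonormalize
    # Project W onto the captured subspace and take an exact small SVD.
    B = Q.T @ W
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_b[:, :rank]
    return U, s[:rank], Vt[:rank]
```

Storing `U`, `s`, `Vt` instead of `W` replaces an `m x n` weight matrix with `(m + n + 1) * rank` parameters, which is the source of the compression.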

Abstract

The massive scale of pretrained models has made efficient compression essential for practical deployment. Low-rank decomposition based on the singular value decomposition (SVD) provides a principled approach for model reduction, but its exact computation is expensive for large weight matrices. Randomized alternatives such as randomized SVD (RSVD) improve efficiency, yet they can suffer from poor approximation quality when the singular value spectrum decays slowly, a regime commonly observed in modern pretrained models. In this work, we address this limitation from both theoretical and empirical perspectives. First, we establish a connection between low-rank approximation error and predictive performance by analyzing softmax perturbations, showing that deviations in class probabilities are controlled by the spectral error of the compressed weights. Second, we demonstrate that RSVD is inadequate, and we propose randomized subspace iteration (RSI) as a more effective alternative. By incorporating multiple power iterations, RSI improves spectral separation and provides a controllable mechanism for enhancing approximation quality. We evaluate our approach on both convolutional networks and transformer-based architectures. Our results show that RSI achieves near-optimal approximation quality while outperforming RSVD in predictive accuracy under aggressive compression, enabling efficient model compression.
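The abstract's claim that class-probability deviations are controlled by the spectral error of the compressed weights can be illustrated numerically. The sketch below is not the paper's bound; it uses a hypothetical last-layer weight matrix, exact truncated SVD as a stand-in compressor, and two standard facts: the logit deviation is bounded by the spectral norm of the weight perturbation, and softmax is 1-Lipschitz in the l2 norm.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

# Hypothetical last-layer weights (classes x features) and a unit input.
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 64))
x = rng.standard_normal(64)
x /= np.linalg.norm(x)

# Rank-k compression via exact truncated SVD (optimal baseline).
k = 4
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

# Logit deviation is bounded by the spectral error of the weights:
#   ||W x - W_k x||_2 <= ||W - W_k||_2 * ||x||_2 = sigma_{k+1}.
logit_dev = np.linalg.norm(W @ x - W_k @ x)
spec_err = s[k]

# Softmax is 1-Lipschitz in l2, so class-probability deviations are
# controlled by the same spectral quantity.
prob_dev = np.linalg.norm(softmax(W @ x) - softmax(W_k @ x))
```

Under this chain of inequalities, `prob_dev <= logit_dev <= spec_err`, which is the sense in which better spectral approximation (the quantity RSI improves over RSVD) translates into smaller shifts in predicted probabilities.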
