AA-SVD: Anchored and Adaptive SVD for Large Language Model Compression

arXiv cs.LG / 4/3/2026


Key Points

  • The paper proposes AA-SVD, a fast low-rank SVD-based framework that compresses billion-parameter LLMs without requiring retraining.
  • It addresses error propagation caused by distribution shifts during layer-by-layer compression by both explicitly modeling upstream input shifts and anchoring each compressed layer to the original layer outputs.
  • Beyond compressing individual layers, AA-SVD refines each Transformer block end-to-end to reduce block-level output distortion and enable joint compensation for accumulated errors.
  • Experiments show AA-SVD outperforms prior SVD-style baselines across a range of compression ratios, with especially large gains under aggressive compression budgets where other methods substantially degrade or fail.
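
The anchored, shift-aware layer objective sketched in these points can be illustrated with a small reduced-rank least-squares routine. This is a simplified sketch under assumed shapes, not the paper's exact algorithm; the function name, calibration matrices, and factorization choices here are all hypothetical.

```python
import numpy as np

def anchored_lowrank(W, X_orig, X_shift, rank):
    """Rank-r factors (L, R) for a linear layer that map the SHIFTED
    inputs back to the ORIGINAL layer outputs (anchoring).

    W:       (d_out, d_in) original weight matrix
    X_orig:  (d_in, n)     calibration activations fed to the original layer
    X_shift: (d_in, n)     activations after upstream layers were compressed
    """
    # Anchor target: what the original layer produced on original inputs.
    Y = W @ X_orig                          # (d_out, n)
    # Unconstrained least-squares map from shifted inputs to Y.
    B = Y @ np.linalg.pinv(X_shift)         # (d_out, d_in)
    # Rank constraint via reduced-rank regression: project the fitted
    # values B @ X_shift onto their top-r left singular directions.
    U, _, _ = np.linalg.svd(B @ X_shift, full_matrices=False)
    L = U[:, :rank]                         # (d_out, r)
    R = L.T @ B                             # (r, d_in)
    return L, R                             # L @ R is the rank-r weight
```

When the shifted inputs equal the original inputs and the rank budget is not binding, `L @ R` reproduces the original layer's outputs on the calibration set exactly; at lower ranks it trades reconstruction error for compression.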

Abstract

We introduce a fast low-rank factorization framework that compresses billion-parameter large language models without retraining. Existing factorization-based approaches either optimize only on the original inputs, ignoring distribution shifts introduced by upstream compression and thus propagating errors forward, or rely only on the shifted inputs and risk drifting away from the original outputs; our approach accounts for both. By anchoring each compressed layer to the original outputs while explicitly modeling input distribution shifts, our method finds a low-rank approximation that maintains functional equivalence with the original model. Beyond compressing individual layers, we further refine each Transformer block end-to-end, minimizing block-level output distortion and allowing the compressed layers to jointly compensate for accumulated errors. Experiments on large language models show that our method consistently outperforms existing SVD-based baselines across compression ratios; the advantage becomes increasingly pronounced at aggressive compression budgets, where competing methods degrade substantially or collapse entirely. The result is a practical solution for efficient, large-scale model deployment.
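
The block-level refinement described in the abstract can likewise be sketched compactly: once every layer in a block is compressed, one of the block's low-rank factors can be re-fit in closed form so the whole block better matches the original block's outputs. This is an illustrative sketch under assumed shapes, not the paper's method; the paper refines the full block end-to-end, whereas this toy version re-solves only the final projection with everything else frozen, and all names are hypothetical.

```python
import numpy as np

def refit_last_factor(H_orig, H_comp_pre, L):
    """Closed-form re-fit of the block's final low-rank projection.

    H_orig:     (d_out, n) original block outputs on calibration data
    H_comp_pre: (d_in, n)  compressed block's activations just before
                           its last linear map
    L:          (d_out, r) fixed left factor of that last map

    Solves min_R || H_orig - L @ R @ H_comp_pre ||_F for the right
    factor, so the compressed block absorbs errors accumulated by the
    compressed layers before it.
    """
    R_new = np.linalg.pinv(L) @ H_orig @ np.linalg.pinv(H_comp_pre)
    return R_new
```

The closed form is exact when `L` has full column rank and the calibration activations have full row rank; in practice a gradient-based joint refinement of all factors in the block would play the role this single least-squares step plays here.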