SLaB: Sparse-Lowrank-Binary Decomposition for Efficient Large Language Models

arXiv cs.LG / 4/7/2026


Key Points

  • SLaB is a new compression framework that decomposes each linear-layer weight of an LLM into three components, a sparse matrix, a low-rank matrix, and a binary matrix, reducing compute and memory cost while limiting performance degradation.
  • Aiming to preserve performance even at the high compression ratios where prior methods tend to break down, SLaB requires no retraining and guides the decomposition with activation-aware pruning scores.
  • In experiments on Llama-family models, SLaB is reported to reduce perplexity by up to 36% over existing methods at 50% compression and to improve zero-shot task accuracy by up to 8.98% over the baseline.
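The three-way split above can be illustrated with a toy NumPy sketch. This is an assumption-laden illustration, not the paper's actual algorithm: SLaB guides the decomposition with activation-aware pruning scores, whereas here we use plain weight magnitudes for the sparse part, a truncated SVD of the residual for the low-rank part, and a scaled sign matrix for the binary part.

```python
import numpy as np

def slab_like_decompose(W, sparsity=0.05, rank=8):
    """Toy sparse + low-rank + binary split of a weight matrix.

    Illustrative only: the real SLaB method uses activation-aware
    pruning scores, not raw magnitudes, to guide this decomposition.
    """
    # Sparse part: keep the k largest-magnitude entries.
    k = int(sparsity * W.size)
    thresh = np.partition(np.abs(W).ravel(), -k)[-k]
    S = np.where(np.abs(W) >= thresh, W, 0.0)

    # Low-rank part: truncated SVD of the residual.
    R = W - S
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    # Binary part: sign matrix with a single scale (1-bit storage).
    R2 = R - L
    alpha = np.mean(np.abs(R2))
    B = alpha * np.sign(R2)
    return S, L, B

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
S, L, B = slab_like_decompose(W)
rel_err = np.linalg.norm(W - (S + L + B)) / np.linalg.norm(W)
```

The scale `alpha = mean(|R2|)` is the least-squares-optimal scalar for a sign matrix, so the final residual norm never exceeds that of dropping the binary term entirely; storing `S` in sparse format, `L` as two thin factors, and `B` as 1-bit signs plus one scalar is what yields the memory savings.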

Abstract

The rapid growth of large language models (LLMs) presents significant deployment challenges due to their massive computational and memory demands. While model compression, such as network pruning, offers potential solutions, most existing methods often fail to maintain good performance at high compression ratios. To address this, we propose SLaB, a novel framework that decomposes each linear layer weight into three complementary components: a sparse matrix, a low-rank matrix, and a binary matrix. SLaB eliminates the need for retraining and leverages activation-aware pruning scores to guide the decomposition process. Experiments on Llama-family models demonstrate that SLaB achieves state-of-the-art performance, reducing perplexity by up to 36% compared to existing methods at 50% compression and improving accuracy by up to 8.98% over the baseline on zero-shot tasks.