Optimal Brain Decomposition for Accurate LLM Low-Rank Approximation

arXiv cs.LG / 4/2/2026


Key Points

  • The paper addresses how to perform low-rank approximation of LLM weight matrices for fine-tuning and inference, extending beyond common SVD-after-activation-whitening approaches.
  • It proposes OBD-LLM, which performs decomposition in the model space using second-order information from the Hessian rather than relying on input-side whitening alone.
  • By applying a rigorous Kronecker-factorization of the Hessian, the method accounts for both the layer’s input and output information, improving the quality of the approximation.
  • The approach is “loss-aware” and uses bi-directional whitening on the weight matrix, yielding a closed-form optimal decomposition solution.
  • Experiments report approximately 20–40% better results than the prior state-of-the-art decomposition method, SVD-LLM.

Abstract

Low-rank decomposition has emerged as an important problem in Large Language Model (LLM) fine-tuning and inference. Through Singular Value Decomposition (SVD), a weight matrix can be optimally factorized into low-rank factors. A common prior practice was to decompose the weight in an activation-whitened space, which achieved satisfying results. In this work, we propose Optimal Brain Decomposition LLM (OBD-LLM), which studies the decomposition problem in the model space by utilizing second-order Hessian information. Through a rigorous Kronecker-factorization of the Hessian, we show that the decomposition needs to consider both the input and output information of the layer, and it achieves much better decomposition results than input-only methods. Our loss-aware decomposition method involves a bi-directional whitening of the weight matrix. As a result, OBD-LLM provides a closed-form solution for the optimal decomposition of weights in the language model. Remarkably, we achieve ~20–40% better results than the previous state-of-the-art decomposition method, SVD-LLM.
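The bi-directional whitening idea described above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the paper's implementation: `S_in` and `S_out` stand in for the two Kronecker factors of the layer Hessian (input-side and output-side curvature), and the function `low_rank_two_sided` is a name introduced here for illustration. The weight is whitened on both sides, truncated by SVD in the whitened space, and the factors are mapped back.

```python
import numpy as np

def low_rank_two_sided(W, S_in, S_out, rank):
    """Sketch of a two-sided (bi-directional) whitened low-rank approximation.

    W     : (d_out, d_in) weight matrix
    S_in  : (d_in, d_in) SPD input-side factor (e.g. input covariance)
    S_out : (d_out, d_out) SPD output-side factor (hypothetical stand-in
            for the output-side Kronecker factor of the Hessian)
    """
    # Cholesky factors act as whitening transforms on each side.
    L_in = np.linalg.cholesky(S_in)    # (d_in, d_in)
    L_out = np.linalg.cholesky(S_out)  # (d_out, d_out)

    # Whiten the weight on both sides, then take a truncated SVD there,
    # where plain SVD truncation is optimal for the induced objective.
    W_white = L_out.T @ W @ L_in
    U, s, Vt = np.linalg.svd(W_white, full_matrices=False)
    U_r, s_r, Vt_r = U[:, :rank], s[:rank], Vt[:rank]

    # Map the truncated factors back out of the whitened space:
    # W ≈ L_out^{-T} (U_r diag(s_r) Vt_r) L_in^{-1} = A @ B
    A = np.linalg.solve(L_out.T, U_r * s_r)   # (d_out, rank)
    B = np.linalg.solve(L_in.T, Vt_r.T).T     # (rank, d_in)
    return A, B
```

With the input-side factor only (replacing `S_out` by the identity), this reduces to the activation-whitening scheme the abstract contrasts against; the output-side factor is what makes the whitening bi-directional.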