W2T: LoRA Weights Already Know What They Can Do

arXiv cs.LG / 3/18/2026

Key Points

  • The paper shows that LoRA checkpoints store task-specific updates in low-rank weight matrices, but different factorizations can represent the same update, complicating interpretation.
  • It proposes Weight2Token (W2T), a method that maps each LoRA update to a canonical form using QR decomposition followed by SVD to remove factorization ambiguity.
  • The canonical factors are tokenized and processed by a Transformer to produce a weight-space embedding that reflects the adapter's behavior without running the base model or accessing training data.
  • Across language and vision LoRA collections, W2T yields strong results for attribute classification, performance prediction, and adapter retrieval, and the authors release code on GitHub.

Abstract

Each LoRA checkpoint compactly stores task-specific updates in low-rank weight matrices, offering an efficient way to adapt large language models to new tasks and domains. In principle, these weights already encode what the adapter does and how well it performs. In this paper, we ask whether this information can be read directly from the weights, without running the base model or accessing training data. A key obstacle is that a single LoRA update can be factorized in infinitely many ways. Without resolving this ambiguity, models trained on the factors may fit the particular factorization rather than the underlying update. To this end, we propose Weight2Token (W2T), which maps each LoRA update to a provably canonical form via QR decomposition followed by SVD, so that all equivalent factorizations share the same representation. The resulting components are then tokenized and processed by a Transformer to produce a weight-space embedding. Across language and vision LoRA collections, W2T achieves strong results on attribute classification, performance prediction, and adapter retrieval, demonstrating that LoRA weights reliably indicate model behavior once factorization ambiguity is removed. Code is available at https://github.com/xiaolonghan2000/Weight2Token.
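To make the factorization ambiguity and the QR-then-SVD canonicalization concrete, here is a minimal numpy sketch. It is an illustration under stated assumptions, not the authors' implementation: it assumes the standard LoRA parameterization ΔW = B·A (B of shape d×r, A of shape r×k) and shows that two equivalent factorizations (B·M, M⁻¹·A) map to the same singular values after canonicalization, while the r×r SVD avoids ever materializing the full d×k update.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 16, 12, 4  # toy dimensions; real adapters are much larger

# A LoRA update delta_W = B @ A, with B (d x r) and A (r x k).
B = rng.standard_normal((d, r))
A = rng.standard_normal((r, k))

# Any invertible r x r matrix M yields an equivalent factorization:
# (B @ M) @ (M^-1 @ A) == B @ A, so the raw factors are ambiguous.
M = rng.standard_normal((r, r)) + 5 * np.eye(r)  # kept well-conditioned
B2, A2 = B @ M, np.linalg.solve(M, A)

def canonicalize(B, A):
    """Map (B, A) to an SVD-like canonical form of delta_W = B @ A,
    using only small QR and r x r SVD factorizations (a sketch of
    the QR-then-SVD idea; sign conventions etc. are omitted)."""
    Qb, Rb = np.linalg.qr(B)      # B = Qb @ Rb, Qb has orthonormal columns
    Qa, Ra = np.linalg.qr(A.T)    # A^T = Qa @ Ra
    U, s, Vt = np.linalg.svd(Rb @ Ra.T)  # SVD of a small r x r matrix
    # delta_W = (Qb @ U) @ diag(s) @ (Vt @ Qa^T)
    return Qb @ U, s, Vt @ Qa.T

U1, s1, V1 = canonicalize(B, A)
U2, s2, V2 = canonicalize(B2, A2)

# Equivalent factorizations share the same singular values...
print(np.allclose(s1, s2))                 # True
# ...and the canonical form still reconstructs the original update.
print(np.allclose((U1 * s1) @ V1, B @ A))  # True
```

Note that the singular values are fully invariant, while the singular vectors are canonical only up to sign (and rotation within repeated singular values); a full canonicalization as described in the paper would also fix those conventions before tokenization.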