W2T: LoRA Weights Already Know What They Can Do
arXiv cs.LG · March 18, 2026
News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper shows that LoRA checkpoints store a task-specific update as a pair of low-rank factor matrices, but this factorization is not unique: any invertible mixing of the two factors represents the same update, which complicates interpreting the weights directly.
- It proposes Weight2Token (W2T), which maps each LoRA update to a canonical form using QR decomposition followed by SVD, removing the factorization ambiguity.
- The canonical factors are tokenized and processed by a Transformer to produce a weight-space embedding that reflects the adapter's behavior without running the base model or accessing training data.
- Across language and vision LoRA collections, W2T yields strong results for attribute classification, performance prediction, and adapter retrieval, and the authors release code on GitHub.
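The factorization ambiguity and the QR-then-SVD canonicalization in the bullets above can be sketched numerically. This is an illustrative reconstruction of the idea, not the paper's implementation: the function name `canonical_factors` and the exact composition of the QR and SVD steps are assumptions for the sketch, and W2T's actual canonical form may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # model dimension and LoRA rank (toy sizes)

# A LoRA update is stored as two low-rank factors: delta_W = B @ A.
B = rng.normal(size=(d, r))
A = rng.normal(size=(r, d))

# Any invertible r x r matrix M yields a different factorization of the
# SAME update, since (B @ M) @ (inv(M) @ A) == B @ A.
M = rng.normal(size=(r, r))
B2, A2 = B @ M, np.linalg.inv(M) @ A

def canonical_factors(B, A):
    """Map (B, A) to factorization-invariant factors via QR then SVD.

    A sketch of the canonicalization idea described in the paper;
    assumes B and A have full rank r.
    """
    Qb, Rb = np.linalg.qr(B)      # orthonormal basis for the column space
    Qa, Ra = np.linalg.qr(A.T)    # orthonormal basis for the row space
    # SVD of the small r x r core removes the remaining invertible-mixing
    # ambiguity, leaving singular values that depend only on delta_W.
    U, s, Vt = np.linalg.svd(Rb @ Ra.T)
    return Qb @ U, s, Vt @ Qa.T

U1, s1, V1 = canonical_factors(B, A)
U2, s2, V2 = canonical_factors(B2, A2)

# Both factorizations canonicalize to the same singular values,
# and each canonical form exactly reconstructs the original update.
assert np.allclose(s1, s2)
assert np.allclose(U1 * s1 @ V1, B @ A)
assert np.allclose(U2 * s2 @ V2, B @ A)
```

The singular values (and, up to sign, the singular vectors) depend only on the update `B @ A` itself, which is what makes the canonical factors a stable input for the downstream tokenizer and Transformer.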