W2T: LoRA Weights Already Know What They Can Do
arXiv cs.LG / 3/18/2026
Key Points
- The paper observes that LoRA checkpoints store task-specific updates as low-rank factors, but the factorization is not unique: ΔW = BA equals (BG)(G⁻¹A) for any invertible matrix G, so the raw factors are ambiguous and hard to interpret directly.
- It proposes Weight2Token (W2T), a method that maps each LoRA update to a canonical form using QR decomposition followed by SVD, removing the factorization ambiguity (see the first sketch after this list).
- The canonical factors are tokenized and processed by a Transformer to produce a weight-space embedding that reflects the adapter's behavior, without running the base model or accessing training data (a second sketch follows below).
- Across language and vision LoRA collections, W2T yields strong results for attribute classification, performance prediction, and adapter retrieval, and the authors release code on GitHub.
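To make the ambiguity and the QR-then-SVD fix concrete, here is a minimal NumPy sketch. The function name `canonical_lora_factors` and the exact pipeline are illustrative assumptions, not the paper's released code; the sketch computes the thin SVD of ΔW = BA from the small factors, which is one standard way to realize a QR-then-SVD canonicalization:

```python
import numpy as np

def canonical_lora_factors(B, A):
    """Map a LoRA update Delta W = B @ A (B: d_out x r, A: r x d_in) to a
    factorization-invariant form via QR then a small SVD, without ever
    materializing the full d_out x d_in update.
    (Illustrative sketch; the paper's exact procedure may differ.)
    """
    Qb, Rb = np.linalg.qr(B)             # B = Qb @ Rb,   Qb: d_out x r
    Qa, Ra = np.linalg.qr(A.T)           # A.T = Qa @ Ra, so A = Ra.T @ Qa.T
    U, s, Vt = np.linalg.svd(Rb @ Ra.T)  # only an r x r SVD is needed
    # Delta W = (Qb @ U) @ diag(s) @ (Vt @ Qa.T): the thin SVD of the update.
    return Qb @ U, s, Vt @ Qa.T

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 32, 4
B = rng.standard_normal((d_out, r))
A = rng.standard_normal((r, d_in))
G = rng.standard_normal((r, r))          # any invertible r x r mixing matrix

# (B, A) and (B @ G, inv(G) @ A) encode the same update Delta W ...
U1, s1, Vt1 = canonical_lora_factors(B, A)
U2, s2, Vt2 = canonical_lora_factors(B @ G, np.linalg.inv(G) @ A)

# ... and canonicalization removes the ambiguity (up to SVD sign flips).
assert np.allclose(s1, s2)
assert np.allclose((U1 * s1) @ Vt1, B @ A)
assert np.allclose((U1 * s1) @ Vt1, (U2 * s2) @ Vt2)
```

Because the canonical form is the SVD of the update itself, it is unique up to sign flips of paired singular-vector columns whenever the nonzero singular values are distinct, which is why it no longer depends on which (B, A) pair the checkpoint happened to store.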
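How the canonical factors are turned into tokens is not spelled out in this summary, so the PyTorch sketch below is one plausible reading under loudly stated assumptions: each rank-one component (uᵢsᵢ, vᵢ) becomes one token, and a small Transformer encoder pools the tokens via a CLS token. `W2TEncoder`, `d_model`, and CLS pooling are hypothetical choices, not the paper's architecture:

```python
import torch
import torch.nn as nn

class W2TEncoder(nn.Module):
    """Hypothetical weight-to-token sketch: each canonical rank-one
    component of a layer's update is projected to one token, and a
    Transformer encoder pools the tokens into a weight-space embedding.
    All names and dimensions here are illustrative assumptions.
    """
    def __init__(self, d_out, d_in, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        # Project a concatenated (s_i * u_i, v_i) component to one token.
        self.proj = nn.Linear(d_out + d_in, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, U, s, Vt):
        # U: (batch, d_out, r), s: (batch, r), Vt: (batch, r, d_in)
        comps = torch.cat([U.transpose(1, 2) * s.unsqueeze(-1), Vt], dim=-1)
        tokens = self.proj(comps)                       # (batch, r, d_model)
        tokens = torch.cat([self.cls.expand(len(tokens), -1, -1), tokens], 1)
        return self.encoder(tokens)[:, 0]               # CLS pooling

enc = W2TEncoder(d_out=64, d_in=32)
U, s, Vt = torch.randn(2, 64, 4), torch.rand(2, 4), torch.randn(2, 4, 32)
emb = enc(U, s, Vt)  # (2, 256) embedding, computed from weights alone
```

Whatever the exact architecture, the key property claimed is the same: the embedding is a function of the adapter's weights only, so no forward passes through the base model and no training data are required.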
Related Articles
How political censorship actually works inside Qwen, DeepSeek, GLM, and Yi: Ablation and behavioral results across 9 models
Reddit r/LocalLLaMA
Prompt Engineering: Why the Way You Ask Changes Everything (An Introductory Guide)
Dev.to
The Obligor
Dev.to
The Markup
Dev.to
The Complete 2026 Guide to AI Blog Monetization: From Your First Post to $1,000 a Month
Dev.to