Preference-Aligned LoRA Merging: Preserving Subspace Coverage and Addressing Directional Anisotropy
arXiv cs.AI / 3/30/2026
Key Points
- The paper shows that merging multiple LoRA modules is difficult because their update directions occupy different subspaces and contribute unevenly, so naive merging can hurt task-critical directions and skew the merged representation toward some tasks at the expense of others.
- It frames the issue with two complementary concepts: subspace coverage (how well the merged LoRA directions span the directions each task needs) and anisotropy (how unevenly influence is distributed across those directions); a rough way to measure both is sketched after this list.
- The authors propose TARA-Merging, which aligns merging weights using a preference-weighted cross-entropy pseudo-loss while explicitly preserving task-relevant LoRA subspaces (see the second sketch after this list).
- Experiments on eight vision benchmarks and six NLI benchmarks find that TARA-Merging consistently beats vanilla and LoRA-aware merging baselines, indicating improved robustness and generalization.
- The results emphasize that effective LoRA merging should address both subspace coverage and directional anisotropy, rather than simply combining modules or treating task awareness only superficially.
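
To make the two diagnostics concrete, here is a minimal sketch (not the paper's code) of how one might quantify them for LoRA updates via the SVD of the collapsed update ΔW = BA. The function names, the rank threshold `r`, and the specific metrics (mean squared cosine of principal angles for coverage, top-direction energy fraction for anisotropy) are illustrative assumptions.

```python
# Illustrative diagnostics for merged LoRA updates; all names and metric
# choices here are assumptions, not the paper's definitions.
import numpy as np


def lora_delta(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Collapsed LoRA update Delta_W = B @ A, shape (d_out, d_in)."""
    return B @ A


def principal_subspace(delta: np.ndarray, r: int) -> np.ndarray:
    """Top-r left singular vectors: the directions the update actually uses."""
    U, _, _ = np.linalg.svd(delta, full_matrices=False)
    return U[:, :r]


def subspace_coverage(U_task: np.ndarray, U_merged: np.ndarray) -> float:
    """How much of a task's principal subspace the merged update retains,
    as the mean squared cosine of the principal angles (in [0, 1])."""
    # Singular values of U_task^T U_merged are cosines of the principal angles.
    s = np.linalg.svd(U_task.T @ U_merged, compute_uv=False)
    return float(np.mean(s ** 2))


def anisotropy(delta: np.ndarray) -> float:
    """Directional imbalance: fraction of spectral energy in the top direction
    (close to 1/rank when directions contribute evenly, 1.0 when one dominates)."""
    s = np.linalg.svd(delta, compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)
    return float(p[0])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_out, d_in, r = 64, 32, 4
    A1, B1 = rng.normal(size=(r, d_in)), rng.normal(size=(d_out, r))
    A2, B2 = rng.normal(size=(r, d_in)), rng.normal(size=(d_out, r))

    d1, d2 = lora_delta(A1, B1), lora_delta(A2, B2)
    merged = 0.5 * d1 + 0.5 * d2  # naive uniform averaging

    print("coverage of task 1:",
          subspace_coverage(principal_subspace(d1, r), principal_subspace(merged, r)))
    print("anisotropy of merged update:", anisotropy(merged))
```

Under this reading, a good merge keeps coverage high for every task while keeping the merged update's anisotropy close to that of the individual updates.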
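And here is a hedged sketch of the preference-weighted part of the objective: learnable per-module merge coefficients optimized against a preference-weighted cross-entropy on a small calibration batch. The toy classifier head, the `preference` vector, and the optimizer settings are assumptions for illustration, and the paper's explicit subspace-preservation term is omitted.

```python
# A toy version of preference-weighted merge-coefficient learning; not the
# paper's implementation, and the subspace-preservation term is left out.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_in, d_out, n_tasks, n_cal = 32, 8, 3, 64

# Frozen base weight plus one collapsed LoRA update (B @ A) per task.
W0 = torch.randn(d_out, d_in)
deltas = [torch.randn(d_out, d_in) * 0.1 for _ in range(n_tasks)]

# Small calibration set with task labels and assumed per-task preferences.
x = torch.randn(n_cal, d_in)
y = torch.randint(0, d_out, (n_cal,))
task_of_example = torch.randint(0, n_tasks, (n_cal,))
preference = torch.tensor([0.5, 0.3, 0.2])  # assumed importance per task

# Learnable merge coefficients, softmax-normalized so they stay on the simplex.
logits = torch.zeros(n_tasks, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(200):
    alpha = torch.softmax(logits, dim=0)
    W = W0 + sum(a * d for a, d in zip(alpha, deltas))  # merged weight
    per_example = F.cross_entropy(x @ W.T, y, reduction="none")
    # Preference-weighted pseudo-loss: examples from preferred tasks count more.
    loss = (preference[task_of_example] * per_example).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned merge coefficients:", torch.softmax(logits, dim=0).detach())
```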