Preference-Aligned LoRA Merging: Preserving Subspace Coverage and Addressing Directional Anisotropy

arXiv cs.AI / 3/30/2026


Key Points

  • The paper shows that merging multiple LoRA modules is difficult because their update directions occupy different subspaces and contribute unevenly, so naive merging can hurt task-critical directions and bias representation across tasks.
  • It frames the issue using two complementary concepts: subspace coverage (how well merged LoRA directions span representational needs) and anisotropy (how imbalanced the directional influence is).
  • The authors propose TARA-Merging, which aligns merging weights using a preference-weighted cross-entropy pseudo-loss while explicitly preserving task-relevant LoRA subspaces.
  • Experiments on eight vision benchmarks and six NLI benchmarks find that TARA-Merging consistently beats vanilla and LoRA-aware merging baselines, indicating improved robustness and generalization.
  • The results emphasize that effective LoRA merging should address both subspace coverage and directional anisotropy, rather than simply averaging modules or applying task-aware weighting superficially.
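The two diagnostic notions above can be made concrete. A minimal sketch, assuming the standard LoRA parameterization ΔW = B·A: anisotropy can be proxied by how concentrated the singular-value spectrum of an update is, and subspace coverage by how much spectral energy of all stacked updates falls in a shared top-k subspace. The function names and exact measures here are illustrative choices, not the paper's definitions.

```python
import numpy as np

def lora_update(A, B):
    # Standard LoRA update: ΔW = B @ A (B: d×r, A: r×k).
    return B @ A

def anisotropy(delta_w):
    # Proxy for directional imbalance: 1 minus the normalized entropy
    # of the squared singular-value distribution.
    # 0 → perfectly isotropic (all directions equal influence),
    # 1 → all influence concentrated in a single direction.
    s = np.linalg.svd(delta_w, compute_uv=False)
    p = s**2 / np.sum(s**2)
    p = p[p > 0]                      # drop zero modes before the log
    h = -np.sum(p * np.log(p))        # spectral entropy
    return 1.0 - h / np.log(len(s))   # normalize by max entropy

def subspace_coverage(deltas, k=8):
    # Proxy for coverage: fraction of the spectral energy of the
    # stacked per-task updates captured by their top-k shared
    # right-singular directions.
    stacked = np.vstack(deltas)
    s = np.linalg.svd(stacked, compute_uv=False)
    return np.sum(s[:k]**2) / np.sum(s**2)
```

For example, an identity-like update (influence spread evenly) has anisotropy near 0, while a rank-1 update scores 1, matching the intuition that naive merging can let one dominant direction drown out the rest.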

Abstract

Merging multiple Low-Rank Adaptation (LoRA) modules is promising for constructing general-purpose systems, yet challenging because LoRA update directions span different subspaces and contribute unevenly. When merged naively, such mismatches can weaken the directions most critical to certain task losses while overemphasizing relatively less important ones, ultimately reducing the model's ability to represent all tasks faithfully. We revisit this problem through two perspectives: subspace coverage, which captures how broadly LoRA directions cover diverse representational directions, and anisotropy, which reflects the imbalance of influence across those directions. We propose TARA-Merging (Task-Rank Anisotropy Alignment), which aligns merging weights using a preference-weighted cross-entropy pseudo-loss while preserving task-relevant LoRA subspaces. This ensures broad subspace coverage and mitigates anisotropy via direction-wise reweighting. Across eight vision and six NLI benchmarks, TARA-Merging consistently outperforms vanilla and LoRA-aware baselines, demonstrating strong robustness and generalization, and highlighting the importance of addressing both subspace coverage and anisotropy in LoRA merging.
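To ground the abstract's two-step recipe, here is a hedged sketch of what "preference-weighted merging plus direction-wise reweighting" could look like. This is not the authors' algorithm: the preference weights are taken as given (the paper derives them from a cross-entropy pseudo-loss), and the `temperature` knob for flattening the singular spectrum is an illustrative assumption.

```python
import numpy as np

def preference_weighted_merge(deltas, pref_weights, temperature=0.5):
    """Illustrative sketch of preference-weighted LoRA merging with
    direction-wise anisotropy mitigation (NOT the paper's exact method).

    deltas       : list of per-task LoRA updates ΔW_i (same shape)
    pref_weights : nonnegative per-task preference scores
    temperature  : 0 keeps the merged spectrum as-is; 1 flattens all
                   singular values to their mean (fully isotropic).
    """
    # Step 1: preference-weighted combination of the task updates.
    w = np.asarray(pref_weights, dtype=float)
    w = w / w.sum()
    merged = sum(wi * d for wi, d in zip(w, deltas))

    # Step 2: direction-wise reweighting. Interpolate each singular
    # value toward the spectrum mean, so no single direction is
    # over-emphasized after the merge.
    U, s, Vt = np.linalg.svd(merged, full_matrices=False)
    s_flat = (1.0 - temperature) * s + temperature * s.mean()
    return U @ np.diag(s_flat) @ Vt
```

With `temperature=0` this reduces to a plain preference-weighted average; increasing it trades fidelity to the merged update for a more isotropic spectrum, which is the qualitative trade-off the abstract attributes to anisotropy mitigation.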