GiVA: Gradient-Informed Bases for Vector-Based Adaptation

arXiv cs.CL / 4/24/2026

📰 News · Models & Research

Key Points

  • The paper introduces GiVA, a gradient-informed initialization strategy designed for vector-based parameter-efficient fine-tuning methods.
  • It targets a key limitation of vector-based adaptation: previous approaches often needed much higher ranks than LoRA to reach similar performance.
  • GiVA reportedly achieves training times comparable to LoRA while preserving the extreme parameter efficiency characteristic of vector-based methods.
  • Across benchmarks in NLU, NLG, and image classification, GiVA consistently outperforms or matches existing vector-based adaptation methods and can be competitive with LoRA.
  • The approach reduces the required rank by roughly 8×, lowering the training costs associated with high-rank settings (see the parameter-count sketch after this list).
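
To make the efficiency claim concrete, the following back-of-the-envelope comparison counts trainable parameters for adapting a single d × d weight matrix. The VeRA-style parameterization (frozen shared bases plus two trainable scaling vectors) is an assumption about what "vector-based adaptation" means here, as is the hidden size; the numbers are illustrative only.

```python
# Hypothetical trainable-parameter counts for one d x d weight matrix.
# LoRA trains two low-rank factors (A: r x d, B: d x r); a VeRA-style
# vector-based method freezes shared random bases and trains only two
# scaling vectors (one of length r, one of length d).
d = 4096  # assumed hidden size

def lora_params(r: int) -> int:
    return 2 * r * d

def vector_params(r: int) -> int:
    return r + d

for r in (16, 128):  # e.g., a LoRA-typical rank vs. an 8x-higher one
    print(f"rank {r:>3}: LoRA = {lora_params(r):>9,}  vector-based = {vector_params(r):,}")
```

Trainable parameters stay nearly flat in the rank for the vector-based method, but the forward and backward passes through the frozen r-dimensional bases still scale with r, which is why cutting the required rank by 8× lowers training cost.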

Abstract

As model sizes continue to grow, parameter-efficient fine-tuning has emerged as a powerful alternative to full fine-tuning. While LoRA is widely adopted among these methods, recent research has explored vector-based adaptation methods due to their extreme parameter efficiency. However, these methods typically require substantially higher ranks than LoRA to match its performance, leading to increased training costs. This work introduces GiVA, a gradient-based initialization strategy for vector-based adaptation. It achieves training times comparable to LoRA and maintains the extreme parameter efficiency of vector-based adaptation. We evaluate GiVA across diverse benchmarks, including natural language understanding, natural language generation, and image classification. Experiments show that our approach consistently outperforms or achieves performance competitive with existing vector-based adaptation methods and LoRA while reducing rank requirements by a factor of eight (8×).
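
The paper itself is the authority on how the gradient informs the bases; as a rough illustration of the general idea, the sketch below initializes a VeRA-style adapter's frozen bases from the top singular directions of a warm-up gradient rather than from random noise. Everything here (the SVD-based recipe, the diag-scaled forward pass, the function names) is an assumption made for illustration, not the paper's actual algorithm.

```python
import torch

def giva_style_init(weight: torch.Tensor, grad: torch.Tensor, r: int):
    """weight: frozen d_out x d_in matrix; grad: its gradient on a warm-up batch.

    Hypothetical gradient-informed initialization: take the top-r singular
    directions of the gradient as the frozen bases, so a small rank r can
    already span the directions fine-tuning is likely to move in.
    """
    U, S, Vh = torch.linalg.svd(grad, full_matrices=False)
    B = U[:, :r]   # frozen output-side basis, d_out x r
    A = Vh[:r, :]  # frozen input-side basis, r x d_in
    # Only these two vectors are trained, as in vector-based adaptation.
    # lambda_b starts at zero so the adapter is a no-op at initialization.
    lambda_b = torch.zeros(weight.shape[0], requires_grad=True)  # length d_out
    lambda_d = torch.ones(r, requires_grad=True)                 # length r
    return B, A, lambda_b, lambda_d

def adapted_forward(x, weight, B, A, lambda_b, lambda_d):
    # y = x W^T + ((x A^T) diag(lambda_d) B^T) diag(lambda_b)
    delta = (((x @ A.T) * lambda_d) @ B.T) * lambda_b
    return x @ weight.T + delta

# Usage with stand-in tensors (a real run would use an actual warm-up gradient):
d_out, d_in, r = 256, 256, 16
W = torch.randn(d_out, d_in)
g = torch.randn(d_out, d_in)  # placeholder for the warm-up-batch gradient of W
B, A, lb, ld = giva_style_init(W, g, r)
y = adapted_forward(torch.randn(8, d_in), W, B, A, lb, ld)
print(y.shape)  # torch.Size([8, 256])
```

Relative to purely random frozen bases, an initialization of this kind is what would plausibly let the rank drop while keeping only the two scaling vectors trainable; the paper's reported 8× rank reduction should be read from its own experiments, not from this sketch.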