IGU-LoRA: Adaptive Rank Allocation via Integrated Gradients and Uncertainty-Aware Scoring
arXiv cs.LG, March 17, 2026
Key Points
- IGU-LoRA targets the limitation of uniform rank allocation in LoRA by computing within-layer Integrated Gradients sensitivities and aggregating them into a layer-level score for adaptive rank allocation.
- It adds an uncertainty-aware mechanism that uses exponential moving averages and deviation tracking to suppress noisy updates and better calibrate the number of ranks assigned per layer.
- The authors provide a theoretical bound on the error of the parameter-space Integrated Gradients under a pathwise Hessian-Lipschitz condition to guide the quadrature budget.
- Empirical results show IGU-LoRA consistently outperforms strong PEFT baselines at matched parameter budgets across diverse tasks and architectures, boosting downstream accuracy and robustness.
- Ablations confirm that both the pathwise within-layer sensitivities and the uncertainty-aware rank selection contribute to the gains, and the code is publicly available on GitHub.
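The two core ideas above can be sketched in a few lines: a parameter-space Integrated Gradients pass that aggregates within-layer attributions into one score per layer, and an EMA-with-deviation tracker that penalizes noisy layers before allocating ranks under a budget. This is an illustrative reconstruction, not the authors' code; the zero baseline, Riemann-sum quadrature, absolute-sum aggregation, and the `beta`/`lam` hyperparameters are all assumptions.

```python
def layer_ig_scores(grad_fn, theta, steps=16):
    """Layer scores from parameter-space Integrated Gradients.

    `theta` maps layer name -> list of weights; `grad_fn(point)` returns
    the loss gradient at `point` in the same layout. IG along the straight
    path from a zero baseline to `theta` is approximated with a
    `steps`-point right Riemann sum, and within-layer |attributions| are
    summed into one scalar per layer. (Sketch; the paper's baseline,
    quadrature, and aggregation may differ.)
    """
    avg_grad = {n: [0.0] * len(w) for n, w in theta.items()}
    for k in range(1, steps + 1):
        alpha = k / steps
        point = {n: [alpha * x for x in w] for n, w in theta.items()}
        g = grad_fn(point)
        for n in theta:
            for i in range(len(theta[n])):
                avg_grad[n][i] += g[n][i] / steps
    # IG attribution per weight = (theta_i - 0) * path-averaged gradient_i
    return {n: sum(abs(x * gi) for x, gi in zip(theta[n], avg_grad[n]))
            for n in theta}


class UncertaintyAwareAllocator:
    """EMA-smoothed layer scores with absolute-deviation tracking.

    Layers whose raw scores fluctuate across updates get a penalized
    effective score (ema - lam * deviation), and ranks are allocated
    roughly proportionally under a total rank budget. `beta` and `lam`
    are assumed hyperparameters, not taken from the paper.
    """
    def __init__(self, beta=0.9, lam=1.0):
        self.beta, self.lam = beta, lam
        self.ema, self.dev = {}, {}

    def update(self, raw_scores):
        for n, s in raw_scores.items():
            if n not in self.ema:
                self.ema[n], self.dev[n] = s, 0.0
            else:
                self.dev[n] = (self.beta * self.dev[n]
                               + (1 - self.beta) * abs(s - self.ema[n]))
                self.ema[n] = self.beta * self.ema[n] + (1 - self.beta) * s

    def allocate(self, total_rank, min_rank=1):
        # Penalize noisy layers, then round a proportional share of the
        # budget (exact budget matching would need remainder handling).
        eff = {n: max(self.ema[n] - self.lam * self.dev[n], 1e-12)
               for n in self.ema}
        total = sum(eff.values())
        return {n: max(min_rank, round(total_rank * v / total))
                for n, v in eff.items()}
```

For a loss that is linear in the weights, the straight-path average gradient is exact, so the score reduces to the absolute weight-gradient product summed over the layer; noisier layers then receive fewer ranks as their tracked deviation grows.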
