IGU-LoRA: Adaptive Rank Allocation via Integrated Gradients and Uncertainty-Aware Scoring
arXiv cs.LG / March 17, 2026
Key Points
- IGU-LoRA targets the limitation of uniform rank allocation in LoRA by computing within-layer Integrated Gradients sensitivities and aggregating them into a layer-level score for adaptive rank allocation.
- It adds an uncertainty-aware mechanism that uses exponential moving averages and deviation tracking to suppress noisy updates and better calibrate the number of ranks assigned per layer.
- The authors provide a theoretical bound on the error of the parameter-space Integrated Gradients under a pathwise Hessian-Lipschitz condition to guide the quadrature budget.
- Empirical results show IGU-LoRA consistently outperforms strong PEFT baselines at matched parameter budgets across diverse tasks and architectures, boosting downstream accuracy and robustness.
- Ablation experiments confirm the importance of both the pathwise within-layer sensitivities and the uncertainty-aware rank selection; the code is publicly available on GitHub.
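The pathwise Integrated Gradients scoring in the first bullet can be sketched roughly as follows. This is an illustrative approximation only: the function name `layer_ig_score`, the right-endpoint Riemann quadrature, and the absolute-sum aggregation are assumptions for the sketch, not the paper's exact estimator.

```python
import numpy as np

def layer_ig_score(grad_fn, theta, theta0, steps=32):
    """Approximate parameter-space Integrated Gradients along the straight
    path from theta0 to theta with a right-endpoint Riemann sum, then
    aggregate absolute per-weight attributions into one layer-level score.
    (Illustrative sketch; the paper's estimator and aggregation may differ.)"""
    delta = theta - theta0
    avg_grad = np.zeros_like(theta)
    for k in range(1, steps + 1):
        # gradient of the loss at an interpolated point on the path
        avg_grad += grad_fn(theta0 + (k / steps) * delta)
    avg_grad /= steps
    ig = delta * avg_grad          # completeness: sum(ig) -> L(theta) - L(theta0)
    return float(np.abs(ig).sum())

# toy quadratic loss L(w) = 0.5 * w.w, so grad L(w) = w
theta0 = np.zeros(4)
theta = np.array([1.0, -2.0, 0.5, 0.0])
score = layer_ig_score(lambda w: w, theta, theta0)
```

As the number of quadrature steps grows, the summed attributions converge to the loss change along the path (the completeness property of IG), which is what the paper's Hessian-Lipschitz bound is presumably calibrating the step budget against.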
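The EMA-and-deviation mechanism in the second bullet might look like the following. All names and choices here (`allocate_ranks`, the decay `beta`, the deviation penalty `kappa`, and the proportional budget split) are hypothetical stand-ins for illustration; the paper's actual update rule is not reproduced here.

```python
import numpy as np

def allocate_ranks(score_history, total_rank, beta=0.9, kappa=1.0, r_min=1):
    """Uncertainty-aware rank allocation sketch: track an EMA of each layer's
    sensitivity score plus an EMA of its absolute deviation, penalize noisy
    layers by kappa * deviation, then split the total rank budget in
    proportion to the calibrated scores. (Hypothetical; for illustration.)"""
    ema = dev = None
    for scores in score_history:
        s = np.asarray(scores, dtype=float)
        if ema is None:
            ema = s.copy()
            dev = np.zeros_like(s)
        else:
            # deviation is measured against the EMA *before* it is updated
            dev = beta * dev + (1 - beta) * np.abs(s - ema)
            ema = beta * ema + (1 - beta) * s
    calibrated = np.maximum(ema - kappa * dev, 1e-12)
    raw = calibrated * total_rank / calibrated.sum()
    ranks = np.maximum(raw.astype(int), r_min)
    # hand out any budget left over by flooring, highest calibrated score first
    order = np.argsort(-calibrated)
    i = 0
    while ranks.sum() < total_rank:
        ranks[order[i % len(ranks)]] += 1
        i += 1
    return ranks

steady = allocate_ranks([[3, 1, 2], [3, 1, 2]], total_rank=12)
noisy = allocate_ranks([[3, 1, 2], [3, 5, 2], [3, 1, 2]], total_rank=12)
```

In the toy run above, a layer whose score oscillates accumulates deviation and receives fewer ranks than it would with a steady score, which is the suppression effect the bullet describes.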