BoostLoRA: Growing Effective Rank by Boosting Adapters
arXiv cs.LG / 5/1/2026
Key Points
- BoostLoRA addresses a key limitation of ultra-low-parameter PEFT by enabling model expressivity to grow beyond a fixed low-rank subspace cap.
- It runs an iterative gradient-boosting loop: each round trains a minimal adapter only on examples the current model mispredicts, then merges that adapter into the base weights and discards it, so inference carries no extra parameters or latency (see the first sketch after this list).
- A ROTATE SVD basis strategy assigns each training round to a distinct orthogonal subspace, so the cumulative effective rank of the merged updates grows linearly with the number of rounds (illustrated in the second sketch below).
- Experiments on Qwen2.5-3B show strong gains over both TinyLoRA and full fine-tuning on GSM8K, MATH-500, MBPP, and HumanEval, with full fine-tuning notably weaker on the code-generation benchmarks.
- The method also demonstrates cross-architecture transfer on protein binding classification using ESM2-650M, suggesting broader applicability of the training/merging strategy.
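The boost-and-merge loop lends itself to a compact illustration. The sketch below is a toy NumPy analogue, not the paper's implementation: each round fits a rank-1 update to the residual on the currently worst-predicted examples and folds it into the base weight, mirroring the merge-then-discard step. The least-squares fit, the median error threshold for "mispredicted", and all variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, n = 16, 8, 200
X = rng.normal(size=(n, d_in))
W_true = rng.normal(size=(d_out, d_in))
Y = X @ W_true.T
W = np.zeros((d_out, d_in))  # "base weight"; starts at zero in this toy

for t in range(5):
    residual = Y - X @ W.T                 # what the current model still gets wrong
    errs = np.linalg.norm(residual, axis=1)
    hard = errs > np.median(errs)          # train only on the mispredicted half
    Xh, Rh = X[hard], residual[hard]
    # Fit the residual by least squares, then keep only its top rank-1
    # component: this plays the role of one round's minimal adapter.
    W_ls, *_ = np.linalg.lstsq(Xh, Rh, rcond=None)    # shape (d_in, d_out)
    U, S, Vt = np.linalg.svd(W_ls.T, full_matrices=False)
    adapter = S[0] * np.outer(U[:, 0], Vt[0])         # rank-1 update
    W = W + adapter                        # merge into base, discard adapter
    print(f"round {t}: mean residual norm {errs.mean():.3f}")
```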
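The ROTATE claim, that dedicating each round to a fresh orthogonal subspace makes the cumulative effective rank grow linearly in the number of rounds, can be checked numerically. A minimal sketch, assuming the bases come from fixed orthonormal matrices (random orthogonal matrices stand in here for an SVD of the base weight):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32
# Fixed orthonormal bases; random orthogonal matrices are an assumption
# standing in for whatever SVD basis the paper actually uses.
U, _ = np.linalg.qr(rng.normal(size=(d, d)))
V, _ = np.linalg.qr(rng.normal(size=(d, d)))

delta_W = np.zeros((d, d))
for t in range(8):
    # Round t writes only into the t-th orthogonal direction pair, so its
    # rank-1 contribution cannot overlap any earlier round's update.
    delta_W += rng.normal() * np.outer(U[:, t], V[:, t])
    print(f"after round {t + 1}: effective rank = {np.linalg.matrix_rank(delta_W)}")
```

Each print shows the rank of the accumulated update climbing by exactly one per round, matching the linear-growth claim.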