Polynomial Expansion Rank Adaptation: Enhancing Low-Rank Fine-Tuning with High-Order Interactions
arXiv cs.AI / 4/15/2026
Key Points
- The paper argues that LoRA’s linear (bilinear) low-rank update structure limits LLM fine-tuning expressivity because it cannot model nonlinear or higher-order interactions between low-rank factors.
- It introduces Polynomial Expansion Rank Adaptation (PERA), which applies a structured polynomial expansion inside the low-rank factor space to generate higher-order interaction terms before composing the weight update (see the sketch after this list).
- PERA is designed to increase expressive capacity without raising the adaptation rank or adding inference cost, mapping updates onto a polynomial manifold for richer nonlinear coupling.
- The authors provide theoretical analysis suggesting improved expressive power and more effective feature utilization compared with existing linear adaptation methods.
- Experiments across multiple benchmarks show PERA outperforming state-of-the-art approaches, with the square (second-order) terms playing a key role in strong, robust performance across different rank settings; the code is publicly released.
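
The summary does not spell out the exact expansion operator, so the following is a minimal PyTorch sketch of one plausible reading: element-wise (Hadamard) powers of the low-rank factors, combined per degree, which keeps the composed update a single static matrix and therefore adds no rank and no inference cost once merged. The class name `PolyRankAdapter`, the per-degree coefficients, and the element-wise powers are assumptions for illustration, not the paper's released code.

```python
# Hypothetical sketch of a polynomial-expanded low-rank update.
# The specific construction (element-wise powers of the factors, learnable
# per-degree coefficients) is an assumption, not the paper's exact method.
import torch
import torch.nn as nn


class PolyRankAdapter(nn.Module):
    """Rank-r adapter whose update mixes element-wise powers of the low-rank
    factors, so the composed update is still a fixed (d_out, d_in) matrix that
    can be folded into the frozen weight at inference time."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, order: int = 2, alpha: float = 16.0):
        super().__init__()
        self.order = order
        self.scaling = alpha / rank
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # down factor, small random init
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # up factor, zero init so the update starts at 0
        self.coeffs = nn.Parameter(torch.ones(order))          # learnable weight per polynomial degree

    def delta_weight(self) -> torch.Tensor:
        # Sum degree-1..order terms; degree 1 is the plain LoRA-style product B @ A,
        # degree 2 adds the "square" interaction term built from B**2 and A**2.
        delta = torch.zeros(self.B.shape[0], self.A.shape[1], device=self.A.device, dtype=self.A.dtype)
        for k in range(1, self.order + 1):
            delta = delta + self.coeffs[k - 1] * (self.B ** k) @ (self.A ** k)
        return self.scaling * delta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.delta_weight().t()


if __name__ == "__main__":
    base = nn.Linear(512, 512)                     # stand-in for a frozen pretrained projection
    base.weight.requires_grad_(False)
    adapter = PolyRankAdapter(512, 512, rank=8, order=2)
    x = torch.randn(4, 512)
    y = base(x) + adapter(x)                       # adapted forward pass during training
    merged = base.weight + adapter.delta_weight()  # at inference the update folds into W, adding no extra cost
    print(y.shape, merged.shape)
```

In this reading, setting `order=1` recovers a plain LoRA-style bilinear update, while `order=2` adds the second-order (square) terms that the key points single out as important.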