Polynomial Expansion Rank Adaptation: Enhancing Low-Rank Fine-Tuning with High-Order Interactions

arXiv cs.AI / 4/15/2026


Key Points

  • The paper argues that LoRA’s linear (bilinear) low-rank update structure limits LLM fine-tuning expressivity because it cannot model nonlinear or higher-order interactions between low-rank factors.
  • It introduces Polynomial Expansion Rank Adaptation (PERA), which applies structured polynomial expansion inside the low-rank factor space to generate higher-order interaction terms before composing weight updates.
  • PERA is designed to increase expressive capacity without raising the adaptation rank or adding inference cost, mapping updates onto a polynomial manifold for richer nonlinear coupling.
  • The authors provide theoretical analysis suggesting improved expressive power and more effective feature utilization compared with existing linear adaptation methods.
  • Experiments across multiple benchmarks show PERA outperforms state-of-the-art approaches, with square (second-order) terms playing a key role for strong and robust performance across different rank settings, and the code is publicly released.

Abstract

Low-rank adaptation (LoRA) is a widely used strategy for efficient fine-tuning of large language models (LLMs), but its strictly linear structure fundamentally limits expressive capacity. The bilinear formulation of weight updates captures only first-order dependencies between low-rank factors, restricting the modeling of nonlinear and higher-order parameter interactions. In this paper, we propose Polynomial Expansion Rank Adaptation (PERA), a novel method that introduces structured polynomial expansion directly into the low-rank factor space. By expanding each low-rank factor to synthesize high-order interaction terms before composition, PERA transforms the adaptation space into a polynomial manifold capable of modeling richer nonlinear coupling without increasing rank or inference cost. We provide theoretical analysis demonstrating that PERA offers enhanced expressive capacity and more effective feature utilization compared with existing linear adaptation approaches. Empirically, PERA consistently outperforms state-of-the-art methods across diverse benchmarks. Notably, our experiments show that incorporating high-order nonlinear components, particularly square terms, is crucial for enhancing expressive capacity and maintaining strong and robust performance under various rank settings. Our code is available at https://github.com/zhangwenhao6/PERA.
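The summary does not spell out the paper's exact expansion, but the core idea (each low-rank factor is augmented with higher-order, notably square, terms before the factors are composed) can be sketched. Below is a minimal NumPy illustration under the assumption that each factor is mixed with its element-wise square; `pera_update`, `alpha`, and `beta` are hypothetical names, not the authors' API:

```python
import numpy as np

def pera_update(A, B, alpha=0.1, beta=0.1):
    """Hypothetical sketch of a PERA-style weight update.

    Each low-rank factor is expanded with an element-wise square
    (second-order) term before composition, so the resulting delta-W
    lies on a polynomial manifold rather than the purely bilinear
    LoRA manifold. alpha/beta are illustrative mixing coefficients,
    not parameters taken from the paper.
    """
    A_exp = A + alpha * (A * A)   # expand factor A with its square term
    B_exp = B + beta * (B * B)    # expand factor B likewise
    return B_exp @ A_exp          # compose: rank and inference cost unchanged

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2          # layer dimensions and adaptation rank r
A = rng.normal(size=(r, d_in))    # down-projection factor (r x d_in)
B = rng.normal(size=(d_out, r))   # up-projection factor (d_out x r)

delta_W = pera_update(A, B)
assert delta_W.shape == (d_out, d_in)
assert np.linalg.matrix_rank(delta_W) <= r  # rank is not increased
```

Note that because the expansion is applied inside the factor space, the composed update is still a product of an `(d_out, r)` and an `(r, d_in)` matrix, which is why the rank bound and the inference-time cost are preserved, as the abstract claims.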