Low Rank Adaptation for Adversarial Perturbation

arXiv cs.LG / 5/1/2026


Key Points

  • The paper asks whether the adversarial perturbations crafted during adversarial example attacks follow a low-rank structure similar to the low-rank updates LoRA uses for efficient LLM training.
  • It provides theoretical support and extensive experiments across attack methods, model architectures, and datasets, finding that adversarial perturbations indeed have an inherently low-rank structure (a minimal way to test this is sketched after this list).
  • The authors leverage this property to improve black-box adversarial attacks by reducing the search space: they project gradients into a low-dimensional subspace using a reference model and auxiliary data, then restrict perturbation search within that subspace.
  • Across multiple benchmark setups and threat models, the low-rank approach delivers substantial and consistent gains in attack performance versus conventional black-box methods.
  • The findings suggest the low-rank perspective can open new avenues for both stronger adversarial attacks and more effective defenses.
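
As a concrete illustration of the rank check, the sketch below (PyTorch) measures how much of a perturbation matrix's spectral energy falls in its top-k singular values; a genuinely low-rank perturbation concentrates most of the mass in a few of them. The perturbation here is a random placeholder and `spectral_energy_topk` is our name, not the paper's; a real test would use a δ produced by an attack such as PGD.

```python
import torch

def spectral_energy_topk(delta: torch.Tensor, k: int) -> float:
    """Fraction of squared singular-value mass captured by the
    top-k singular values of a 2-D perturbation matrix."""
    s = torch.linalg.svdvals(delta)   # singular values, descending
    return ((s[:k] ** 2).sum() / (s ** 2).sum()).item()

# Placeholder input: a real check would use a perturbation from an actual attack.
delta = torch.randn(224, 224)
for k in (1, 5, 10, 50):
    print(f"top-{k} energy: {spectral_energy_topk(delta, k):.3f}")
```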

Abstract

Low-Rank Adaptation (LoRA), which leverages the insight that model updates typically reside in a low-dimensional space, has significantly improved the training efficiency of Large Language Models (LLMs) by updating neural network layers using low-rank matrices. Since the generation of adversarial examples is an optimization process analogous to model training, this naturally raises the question: Do adversarial perturbations exhibit a similar low-rank structure? In this paper, we provide both theoretical analysis and extensive empirical investigation across various attack methods, model architectures, and datasets to show that adversarial perturbations indeed possess an inherently low-rank structure. This insight opens up new opportunities for improving both adversarial attacks and defenses. We mainly focus on leveraging this low-rank property to improve the efficiency and effectiveness of black-box adversarial attacks, which often suffer from excessive query requirements. Our method follows a two-step approach. First, we use a reference model and auxiliary data to guide the projection of gradients into a low-dimensional subspace. Next, we confine the perturbation search in black-box attacks to this low-rank subspace, significantly improving the efficiency and effectiveness of the adversarial attacks. We evaluate our approach across a range of attack methods, benchmark models, datasets, and threat models. The results demonstrate substantial and consistent improvements in the performance of our low-rank adversarial attacks compared to conventional methods.
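
To make the two-step procedure concrete, below is a minimal PyTorch sketch under stated assumptions: a differentiable reference (surrogate) model, an auxiliary data loader, and a black-box `query_fn(x)` that returns a scalar loss for the target model. The SVD-based basis construction and the RGF-style finite-difference search are illustrative stand-ins for the paper's method, and every name here (`build_subspace`, `subspace_rgf_attack`, `sigma`, `lr`) is hypothetical.

```python
import torch

def build_subspace(ref_model, aux_loader, loss_fn, rank: int) -> torch.Tensor:
    """Step 1: collect input gradients from a reference model on auxiliary
    data; the top-`rank` right singular vectors span a low-dimensional
    subspace for flattened inputs of size D. Returns a (rank, D) basis."""
    grads = []
    for x, y in aux_loader:
        x = x.clone().requires_grad_(True)
        loss_fn(ref_model(x), y).backward()
        grads.append(x.grad.flatten(1))            # (B, D) per batch
    G = torch.cat(grads)                           # (N, D) gradient matrix
    _, _, Vh = torch.linalg.svd(G, full_matrices=False)
    return Vh[:rank]                               # (rank, D), orthonormal rows

def subspace_rgf_attack(query_fn, x, basis, eps, steps=200, sigma=1e-3, lr=0.01):
    """Step 2: random-gradient-free search confined to the subspace.
    Only `rank` coefficients are optimized instead of all D pixels."""
    coef = torch.zeros(basis.shape[0])             # subspace coefficients
    for _ in range(steps):
        u = torch.randn_like(coef)
        u /= u.norm()
        # Two-point finite-difference gradient estimate along direction u.
        d_plus = ((coef + sigma * u) @ basis).view_as(x)
        d_minus = ((coef - sigma * u) @ basis).view_as(x)
        g = (query_fn(x + d_plus) - query_fn(x + d_minus)) / (2 * sigma)
        coef = coef + lr * g * u                   # ascend the target loss
        # Crude L_inf projection: clip in pixel space, re-express in subspace.
        delta = (coef @ basis).view_as(x).clamp(-eps, eps)
        coef = basis @ delta.flatten()
    return (coef @ basis).view_as(x).clamp(-eps, eps)
```

Searching over `rank` coefficients rather than all D pixels is what shrinks the query budget: each finite-difference probe explores a direction the surrogate gradients already suggest matters, which is the intuition behind the reported gains.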