Sharpness-Aware Poisoning: Enhancing Transferability of Injective Attacks on Recommender Systems

arXiv cs.LG / 4/27/2026


Key Points

  • The paper studies how injective data poisoning attacks against recommender systems can be less effective when attackers rely on a single fixed surrogate model instead of the unknown victim model.
  • It argues that poisoned data optimized for the surrogate does not reliably transfer to victim models whose architectures differ significantly from the surrogate's.
  • To improve transferability, the authors propose Sharpness-Aware Poisoning (SharpAP), which uses a sharpness-aware minimization idea to approximate the worst-case victim model during attack optimization.
  • SharpAP is posed as a min-max-min tri-level optimization problem and embedded into an iterative attack process to produce poisoned data that is more robust to structural shifts between models.
  • Experiments on three real-world datasets show SharpAP can significantly enhance attack transferability compared with prior approaches.

Abstract

Recommender Systems (RS) have been shown to be vulnerable to injective attacks, in which attackers inject a limited number of fake user profiles to promote the exposure of target items to real users for unethical gains (e.g., economic or political advantage). Since attackers typically lack knowledge of the victim model deployed in the target RS, existing methods resort to a fixed surrogate model to mimic the potential victim model. Despite considerable progress, we argue that the assumption that *poisoned data generated for the surrogate model can be used to attack other victim models* is wishful thinking. When there are significant structural discrepancies between the surrogate and victim models, attack transferability inevitably suffers. Intuitively, if we could identify the worst-case victim model and iteratively optimize the poisoning effect specifically against it, the generated poisoned data would transfer better to other victim models. However, exactly identifying the worst-case victim model during the attack process is challenging due to the large space of possible victim models. To this end, we propose a novel attack method called Sharpness-Aware Poisoning (SharpAP). Specifically, it employs the sharpness-aware minimization principle to seek an approximately worst-case victim model and optimizes the poisoned data specifically for this worst-case model. The poisoning attack with SharpAP is formulated as a min-max-min tri-level optimization problem. By integrating SharpAP into the iterative attack process, our method generates more robust poisoned data that is less sensitive to shifts in model structure, mitigating overfitting to the surrogate model. Comprehensive experimental comparisons on three real-world datasets demonstrate that SharpAP can significantly enhance attack transferability.
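The sharpness-aware step at the heart of this idea can be illustrated with a minimal sketch. The paper does not publish code here, so everything below is an assumption for illustration: the function name `sam_gradient`, the toy quadratic loss, and the single-step form of the update. The gist of sharpness-aware minimization is to perturb the current parameters along the normalized gradient to reach an approximate worst-case point within a radius `rho`, then take the gradient at that perturbed point instead of at the original parameters.

```python
import numpy as np

def sam_gradient(params, loss_grad, rho=0.05):
    """One sharpness-aware step (illustrative, not the paper's code):
    move to the approximate worst case within an L2 ball of radius rho,
    then return the gradient evaluated there."""
    g = loss_grad(params)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction, scaled to rho
    return loss_grad(params + eps)               # gradient at the perturbed point

# Toy quadratic loss L(w) = ||w||^2 / 2, whose gradient is simply w.
loss_grad = lambda w: w
w = np.array([3.0, 4.0])
g_sam = sam_gradient(w, loss_grad, rho=0.05)  # ≈ [3.03, 4.04]
```

In SharpAP's setting, the "parameters" being perturbed would be those of the surrogate model, so that the poisoned data is optimized against a nearby worst-case model rather than the surrogate itself; the tri-level min-max-min structure then wraps this inner worst-case search with the poisoning objective on the outside.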