Sharpness-Aware Poisoning: Enhancing Transferability of Injective Attacks on Recommender Systems
arXiv cs.LG / 4/27/2026
Key Points
- The paper studies how injective data poisoning attacks against recommender systems (attacks that inject fake user profiles into the training data) lose effectiveness when attackers optimize against a single fixed surrogate model rather than the unknown victim model.
- It argues that poisoned data optimized for the surrogate does not reliably transfer to victim models when the surrogate and victim architectures differ significantly.
- To improve transferability, the authors propose Sharpness-Aware Poisoning (SharpAP), which uses a sharpness-aware minimization idea to approximate the worst-case victim model during attack optimization.
- SharpAP is formulated as a min-max-min tri-level optimization problem and embedded into an iterative attack process, producing poisoned data that remains effective under structural shifts between surrogate and victim models.
- Experiments on three real-world datasets show SharpAP can significantly enhance attack transferability compared with prior approaches.
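The sharpness-aware idea in the key points above can be illustrated with a toy sketch: instead of optimizing the poisoned data against the surrogate parameters directly, the attacker first finds a worst-case perturbation of those parameters within a small ball (approximating a victim model that deviates from the surrogate), then updates the poison against the perturbed parameters. The loss, function names, and constants below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def attack_loss(theta, poison):
    # Toy stand-in for the attacker's objective on a surrogate
    # recommender model: lower is better for the attacker.
    return float(np.sum((theta - poison) ** 2))

def grad_theta(theta, poison):
    # Gradient of the toy loss w.r.t. the surrogate parameters.
    return 2.0 * (theta - poison)

def grad_poison(theta, poison):
    # Gradient of the toy loss w.r.t. the poisoned data.
    return -2.0 * (theta - poison)

def sharpness_aware_poison_step(theta, poison, rho=0.05, lr=0.1):
    # Inner max: ascend the loss in parameter space to the edge of an
    # L2 ball of radius rho, approximating the worst-case victim model
    # in the neighborhood of the surrogate (the sharpness-aware step).
    g = grad_theta(theta, poison)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    theta_worst = theta + eps
    # Outer min: update the poisoned data against the perturbed
    # parameters, so the attack does not overfit the exact surrogate.
    return poison - lr * grad_poison(theta_worst, poison)

# Illustrative surrogate parameters and an all-zeros initial poison.
theta = np.array([1.0, -2.0, 0.5])
poison = np.zeros(3)
for _ in range(100):
    poison = sharpness_aware_poison_step(theta, poison)
```

In the paper's actual tri-level formulation the inner levels also involve retraining the surrogate on the poisoned data; this sketch only shows the sharpness-aware parameter perturbation that distinguishes SharpAP from single-surrogate attacks.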