
PISmith: Reinforcement Learning-based Red Teaming for Prompt Injection Defenses

arXiv cs.LG / March 16, 2026


Key Points

  • PISmith introduces a reinforcement learning-based red-teaming framework to systematically assess prompt-injection defenses under a practical black-box setting by training an attack LLM to optimize injected prompts against defended LLMs.
  • The authors show that standard GRPO-based attacks suffer from reward sparsity, and they address this with adaptive entropy regularization and dynamic advantage weighting to sustain exploration and learn from scarce successes.
  • Extensive evaluation across 13 benchmarks demonstrates that state-of-the-art prompt injection defenses remain vulnerable to adaptive attacks, with PISmith achieving the highest attack success rates among 7 baselines spanning static, search-based, and RL-based attack strategies.
  • PISmith also exhibits strong performance in agentic settings on InjecAgent and AgentDojo against both open-source and closed-source LLMs (e.g., GPT-4o-mini and GPT-5-nano).
  • The code for PISmith is released at https://github.com/albert-y1n/PISmith.
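The two training fixes in the second bullet can be illustrated with a minimal sketch. This is not PISmith's actual implementation (the function names, weighting rule, and hyperparameters below are hypothetical assumptions); it only shows the general shape of a GRPO-style group advantage with scarce successes up-weighted, plus an entropy coefficient that grows as policy entropy falls toward collapse:

```python
import numpy as np

def group_advantages(rewards, success_weight_cap=4.0):
    """GRPO-style group-relative advantages with a hypothetical
    'dynamic advantage weighting': rare successful injections are
    up-weighted in inverse proportion to their frequency in the group."""
    rewards = np.asarray(rewards, dtype=float)
    adv = rewards - rewards.mean()
    std = rewards.std()
    if std > 0:
        adv = adv / std
    success_rate = float((rewards > 0).mean())
    if 0 < success_rate < 1:
        # Amplify learning from scarce successes (positive advantages).
        adv = np.where(adv > 0, adv * min(success_weight_cap, 1.0 / success_rate), adv)
    return adv

def adaptive_entropy_coef(entropy, target_entropy=2.0, base_coef=0.01, gain=0.1):
    """Hypothetical adaptive entropy regularization: raise the entropy
    bonus when measured policy entropy drops below a target, sustaining
    exploration under sparse rewards."""
    return base_coef + gain * max(0.0, target_entropy - entropy)
```

In this sketch, a group where only one of four injected prompts succeeds gets that success's advantage multiplied by up to 4x, and a policy whose entropy has fallen below the target receives a proportionally larger entropy bonus in the loss.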

Abstract

Prompt injection poses serious security risks to real-world LLM applications, particularly autonomous agents. Although many defenses have been proposed, their robustness against adaptive attacks remains insufficiently evaluated, potentially creating a false sense of security. In this work, we propose PISmith, a reinforcement learning (RL)-based red-teaming framework that systematically assesses existing prompt-injection defenses by training an attack LLM to optimize injected prompts in a practical black-box setting, where the attacker can only query the defended LLM and observe its outputs. We find that directly applying standard GRPO to attack strong defenses leads to sub-optimal performance due to extreme reward sparsity -- most generated injected prompts are blocked by the defense, causing the policy's entropy to collapse before discovering effective attack strategies, while the rare successes cannot be learned effectively. In response, we introduce adaptive entropy regularization and dynamic advantage weighting to sustain exploration and amplify learning from scarce successes. Extensive evaluation on 13 benchmarks demonstrates that state-of-the-art prompt injection defenses remain vulnerable to adaptive attacks. We also compare PISmith with 7 baselines across static, search-based, and RL-based attack categories, showing that PISmith consistently achieves the highest attack success rates. Furthermore, PISmith achieves strong performance in agentic settings on InjecAgent and AgentDojo against both open-source and closed-source LLMs (e.g., GPT-4o-mini and GPT-5-nano). Our code is available at https://github.com/albert-y1n/PISmith.
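The black-box setting described above (query the defended LLM, observe only its output) can be sketched as a simple rollout-collection loop. The functions and the marker-based reward below are illustrative assumptions, not PISmith's actual interface:

```python
def injection_reward(output, target_marker):
    """Black-box reward: 1 if the defended model's output indicates the
    injected instruction was followed (a target marker appears), else 0."""
    return 1.0 if target_marker in output else 0.0

def collect_group(query_fn, attacker_fn, task, group_size=8):
    """Sample a group of candidate injected prompts from the attacker
    policy and score each by querying the defended LLM. Only outputs
    are observed -- no gradients or logits from the defended model."""
    prompts = [attacker_fn(task) for _ in range(group_size)]
    rewards = [injection_reward(query_fn(p), task["marker"]) for p in prompts]
    return prompts, rewards
```

The resulting per-group rewards are exactly what a GRPO-style update would normalize into advantages; with a strong defense most rewards are 0, which is the sparsity problem the paper's adaptive entropy regularization and dynamic advantage weighting address.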