Profit is the Red Team: Stress-Testing Agents in Strategic Economic Interactions

arXiv cs.AI / 2026-03-24


Key Points

  • The paper argues that agent security risks must go beyond static prompt attacks because adversaries can strategically manipulate external inputs like retrieved content and tool outputs.
  • It introduces “profit-driven red teaming,” a stress-testing protocol that uses a learned opponent trained to maximize profit from only scalar outcome feedback, avoiding LLM-as-judge scoring, labeled attacks, or an attack taxonomy.
  • The protocol is instantiated in a lean arena of four canonical economic interactions, a controlled testbed where adaptive exploitability can be measured with auditable outcomes (a minimal code sketch of this loop follows the list).
  • Experiments show that agents strong against static baselines can still be consistently exploited when pressured by the profit-optimized adversary, which learns probing, anchoring, and deceptive commitment tactics.
  • The authors further distill observed exploit episodes into compact prompt rules that neutralize many prior failure modes and substantially improve target agent performance, suggesting a practical robustness-improvement workflow.

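To make the protocol concrete, here is a minimal sketch of the red-team learning loop under stated assumptions: a single-issue bargaining game stands in for one of the paper's four economic interactions, a fixed acceptance heuristic stands in for the LLM agent under test, and an epsilon-greedy bandit over parameterized negotiation tactics stands in for the paper's learned opponent. Every name and constant below (`SURPLUS`, `target_agent`, `Tactic`) is illustrative rather than taken from the paper; the one faithful ingredient is that the opponent learns from its scalar profit alone, with no judge scores, attack labels, or taxonomy.

```python
import random
from dataclasses import dataclass

# Hypothetical single-issue bargaining game (a stand-in for one of the
# paper's four economic interactions). The outcome is an auditable
# scalar: the split of a fixed surplus, with zero payoff on disagreement.
SURPLUS = 10.0
MAX_ROUNDS = 6

def target_agent(offer: float, round_: int) -> bool:
    """Stand-in for the LLM agent under test: accepts any offer that
    leaves it at least a (slightly impatient) reservation share."""
    reservation = 0.5 * SURPLUS * (0.95 ** round_)
    return SURPLUS - offer >= reservation

@dataclass
class Tactic:
    anchor: float      # opening demand as a fraction of the surplus
    concession: float  # how much the demand drops each round

# Discrete menu of opponent tactics; the learned red team only ever
# observes the scalar profit of each episode.
TACTICS = [Tactic(a, c) for a in (0.6, 0.75, 0.9) for c in (0.02, 0.05, 0.1)]

def play_episode(t: Tactic) -> float:
    """One negotiation episode; returns the opponent's profit."""
    for r in range(MAX_ROUNDS):
        demand = max(0.0, t.anchor - r * t.concession) * SURPLUS
        if target_agent(demand, r):
            return demand
    return 0.0  # disagreement: both sides get nothing

def red_team(episodes: int = 2000, eps: float = 0.1) -> Tactic:
    """Epsilon-greedy bandit over tactics, trained on profit alone."""
    value = [0.0] * len(TACTICS)
    count = [0] * len(TACTICS)
    for _ in range(episodes):
        i = (random.randrange(len(TACTICS)) if random.random() < eps
             else max(range(len(TACTICS)), key=lambda j: value[j]))
        profit = play_episode(TACTICS[i])
        count[i] += 1
        value[i] += (profit - value[i]) / count[i]  # incremental mean
    return TACTICS[max(range(len(TACTICS)), key=lambda j: value[j])]

if __name__ == "__main__":
    best = red_team()
    print(f"learned tactic: anchor={best.anchor}, concession={best.concession}")
```

Even in this toy setting the bandit settles on a high-but-not-maximal opening anchor with gradual concessions, rediscovering anchoring from profit feedback alone; the paper's opponent searches over far richer message-level strategies against a real LLM agent.
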
Abstract

As agentic systems move into real-world deployments, their decisions increasingly depend on external inputs such as retrieved content, tool outputs, and information provided by other actors. When these inputs can be strategically shaped by adversaries, the relevant security risk extends beyond a fixed library of prompt attacks to adaptive strategies that steer agents toward unfavorable outcomes. We propose profit-driven red teaming, a stress-testing protocol that replaces handcrafted attacks with a learned opponent trained to maximize its profit using only scalar outcome feedback. The protocol requires no LLM-as-judge scoring, attack labels, or attack taxonomy, and is designed for structured settings with auditable outcomes. We instantiate it in a lean arena of four canonical economic interactions, which provide a controlled testbed for adaptive exploitability. In controlled experiments, agents that appear strong against static baselines become consistently exploitable under profit-optimized pressure, and the learned opponent discovers probing, anchoring, and deceptive commitments without explicit instruction. We then distill exploit episodes into concise prompt rules for the agent, which make most previously observed failures ineffective and substantially improve target performance. These results suggest that profit-driven red-team data can provide a practical route to improving robustness in structured agent settings with auditable outcomes.
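
The closing distillation step can be sketched just as compactly. The rule texts and helper below are assumptions for illustration, not the paper's actual distilled rules (those come from logged exploit transcripts); prepending the rules to the system prompt is likewise one plausible way to apply "concise prompt rules for the agent", not a confirmed detail of the authors' pipeline.

```python
# Illustrative only: rule texts and names here are assumptions, not the
# paper's distilled rules. One rule per failure mode the learned opponent
# exposed (probing, anchoring, deceptive commitments).
BASE_SYSTEM_PROMPT = "You negotiate on behalf of the user to maximize their payoff."

DISTILLED_RULES = [
    "Never reveal or hint at your reservation value, even under repeated probing.",
    "Evaluate offers against your own valuation, not the counterparty's opening anchor.",
    "Treat unverifiable commitments from the counterparty as non-binding.",
]

def harden_prompt(base: str, rules: list[str]) -> str:
    """Prepend a compact block of distilled defense rules to the system prompt."""
    rule_block = "\n".join(f"- {r}" for r in rules)
    return f"{base}\n\nStrategic-robustness rules (distilled from red-team episodes):\n{rule_block}"

print(harden_prompt(BASE_SYSTEM_PROMPT, DISTILLED_RULES))
```

The appeal of this workflow is symmetry: the red team's own exploit transcripts supply the material for the defense, so no attack taxonomy is needed at either end.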