A Theoretical Game of Attacks via Compositional Skills

arXiv cs.CL / 5/5/2026

📰 News / Models & Research

Key Points

  • The paper proposes a theoretical attacker–defender game to study how adversarial prompts can bypass alignment defenses in increasingly capable large language models.
  • It develops a “best-response” attack strategy within the framework and shows close connections to several existing adversarial prompting techniques.
  • The authors analyze the game’s equilibria and demonstrate built-in advantages for attackers.
  • Based on the theory, they derive a provably optimal defense strategy and validate a practical instantiation of the theoretically optimal attack against multiple LLMs and benchmarks.
  • Empirical results suggest the instantiated attack can outperform existing adversarial prompting methods across diverse settings.

Abstract

As large language models grow increasingly capable, concerns about their safe deployment have intensified. While numerous alignment strategies aim to restrict harmful behavior, these defenses can still be circumvented through carefully designed adversarial prompts. In this work, we introduce a theoretical framework that formalizes a game between an attacker and a defender. Within this framework, we design a theoretical best-response attack strategy and show that it is closely related to many existing adversarial prompting methods. We further analyze the resulting game, characterize its equilibria, and reveal inherent advantages for the attacker. Drawing on our theoretical analysis, we also derive a provably optimal defense strategy. Empirically, we evaluate a practical instantiation of the theoretically optimal attack and observe stronger performance relative to existing adversarial prompting approaches in diverse settings encompassing different LLMs and benchmarks.
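To make the game-theoretic framing concrete, here is a minimal illustrative sketch (not the paper's actual formalization): the attacker-defender interaction modeled as a finite zero-sum matrix game, where a "best response" is the attacker strategy maximizing expected success against a given defense, and the defender's minimax value bounds what any defense can guarantee. The strategy names and payoff numbers below are invented for illustration only.

```python
# Hypothetical attacker-defender game as a zero-sum matrix game.
# Rows = attacker prompt strategies, columns = defender strategies;
# entries = attack success probability (illustrative numbers).
payoff = [
    [0.9, 0.2, 0.4],  # direct harmful prompt
    [0.6, 0.7, 0.3],  # role-play framing
    [0.5, 0.5, 0.6],  # obfuscated encoding
]

def best_response(payoff, defense_mix):
    """Attacker's best response: the row maximizing expected
    success against a mixed defender strategy."""
    def expected(row):
        return sum(p * q for p, q in zip(row, defense_mix))
    return max(range(len(payoff)), key=lambda i: expected(payoff[i]))

def pure_minimax_value(payoff):
    """Defender's pure-strategy minimax: the smallest worst-case
    attack success rate any single (pure) defense can guarantee."""
    n_cols = len(payoff[0])
    return min(max(row[j] for row in payoff) for j in range(n_cols))

# Against a defender that mostly plays column 0, the attacker
# best-responds with the direct prompt (row 0).
print(best_response(payoff, [0.8, 0.1, 0.1]))  # → 0
print(pure_minimax_value(payoff))              # → 0.6
```

In this toy instance, even the best pure defense leaves the attacker a 0.6 success rate, echoing the paper's point that the game structure carries built-in advantages for the attacker.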