A Theoretical Game of Attacks via Compositional Skills
arXiv cs.CL / 5/5/2026
📰 News · Models & Research
Key Points
- The paper proposes a theoretical attacker–defender game to study how adversarial prompts can bypass alignment defenses in increasingly capable large language models.
- It develops a “best-response” attack strategy within the framework and shows close connections to several existing adversarial prompting techniques.
- The authors analyze the game’s equilibria and demonstrate built-in advantages for attackers.
- Building on the theory, the authors derive a provably optimal defense strategy and empirically validate the theoretically optimal attack against multiple LLMs and benchmarks.
- Empirical results suggest the instantiated attack can outperform existing adversarial prompting methods across diverse settings.
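The attacker–defender framing above can be illustrated with a toy finite zero-sum game. This is an illustrative sketch only, not the paper's actual formulation: the payoff matrix, strategy sets, and helper names below are all hypothetical, chosen to show what a "best response" and a worst-case-minimizing defense mean in game-theoretic terms.

```python
# Toy attacker-defender game (illustrative, not the paper's model).
# Rows index attacker prompt strategies, columns index defender policies;
# each entry is a hypothetical attack success probability.
payoff = [
    [0.9, 0.2, 0.4],   # attack strategy 0
    [0.3, 0.8, 0.5],   # attack strategy 1
    [0.6, 0.6, 0.1],   # attack strategy 2
]

def attacker_best_response(defender_mix):
    """Return the pure attack strategy maximizing expected success
    against a defender mixed strategy, plus its expected payoff."""
    expected = [sum(p * q for p, q in zip(row, defender_mix)) for row in payoff]
    best = max(range(len(payoff)), key=lambda a: expected[a])
    return best, expected[best]

def defender_minimax():
    """Pick the defender column minimizing the attacker's best pure
    response -- a crude stand-in for an 'optimal defense' over pure strategies."""
    cols = range(len(payoff[0]))
    worst_case = {d: max(row[d] for row in payoff) for d in cols}
    return min(cols, key=lambda d: worst_case[d])

# Best response to a uniform defender mix, and the minimax defense column.
best_a, value = attacker_best_response([1 / 3, 1 / 3, 1 / 3])
print(best_a, round(value, 3))  # → 1 0.533
print(defender_minimax())       # → 2
```

In this toy setting the attacker's built-in advantage shows up as the order of moves: the best response is computed after seeing the defender's (mixed) policy, mirroring how adaptive adversarial prompts are crafted against a fixed alignment defense.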