Systematic Scaling Analysis of Jailbreak Attacks in Large Language Models

arXiv cs.LG / 3/13/2026

Key Points

  • The paper introduces a scaling-law framework that models jailbreak attacks as compute-bounded optimization and measures progress using a shared FLOPs axis across attack methods, model families, and harm types.
  • It empirically evaluates four jailbreak paradigms—optimization-based attacks, self-refinement prompting, sampling-based selection, and genetic optimization—across multiple model scales and harmful goals.
  • Prompting-based attacks are found to be more compute-efficient than optimization-based methods, with the authors reframing prompt-based updates as optimization in prompt space to explain this gap.
  • Attacks occupy distinct success–stealthiness operating points, with prompting-based methods achieving both high success and high stealth.
  • Vulnerability is highly goal-dependent, with misinformation-related harms generally easier to elicit than non-misinformation harms.

Abstract

Large language models remain vulnerable to jailbreak attacks, yet we still lack a systematic understanding of how jailbreak success scales with attacker effort across methods, model families, and harm types. We initiate a scaling-law framework for jailbreaks by treating each attack as a compute-bounded optimization procedure and measuring progress on a shared FLOPs axis. Our systematic evaluation spans four representative jailbreak paradigms, covering optimization-based attacks, self-refinement prompting, sampling-based selection, and genetic optimization, across multiple model families and scales on a diverse set of harmful goals. We investigate scaling laws that relate attacker budget to attack success score by fitting a simple saturating exponential function to FLOPs–success trajectories, and we derive comparable efficiency summaries from the fitted curves. Empirically, prompting-based paradigms tend to be more compute-efficient than optimization-based methods. To explain this gap, we cast prompt-based updates into an optimization view and show via a same-state comparison that prompt-based attacks optimize more effectively in prompt space. We also show that attacks occupy distinct success–stealthiness operating points, with prompting-based methods occupying the high-success, high-stealth region. Finally, we find that vulnerability is strongly goal-dependent: harms involving misinformation are typically easier to elicit than non-misinformation harms.
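The fitting step described above, estimating a saturating exponential over FLOPs–success trajectories and deriving an efficiency summary from the fitted curve, can be sketched as follows. The specific parameterization (`saturating_exp`, with asymptote `s_max` and characteristic scale `c0`) and the half-saturation summary are illustrative assumptions, not the authors' exact equations, and the data here is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(flops, s_max, c0):
    # Assumed form of the "simple saturating exponential":
    # s_max is the asymptotic attack-success score,
    # c0 is a characteristic FLOPs scale for the attack method.
    return s_max * (1.0 - np.exp(-flops / c0))

# Synthetic FLOPs–success trajectory standing in for one attack paradigm.
rng = np.random.default_rng(0)
flops = np.logspace(12, 18, 20)                     # attacker budget in FLOPs
noisy = saturating_exp(flops, 0.8, 1e15) + rng.normal(0, 0.02, flops.size)
success = np.clip(noisy, 0.0, 1.0)

# Fit the curve to the observed trajectory.
(s_max_hat, c0_hat), _ = curve_fit(
    saturating_exp, flops, success, p0=[0.5, 1e14]
)

# One comparable efficiency summary: FLOPs needed to reach half of the
# asymptotic success, obtained by inverting the fitted curve.
flops_to_half = -c0_hat * np.log(1 - 0.5)
print(f"fitted s_max={s_max_hat:.2f}, c0={c0_hat:.2e} FLOPs")
print(f"FLOPs to reach 50% of s_max: {flops_to_half:.2e}")
```

Because every attack method is fit on the same FLOPs axis, summaries like `flops_to_half` become directly comparable across paradigms, which is what enables the compute-efficiency ranking reported in the paper.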