Jailbroken Frontier Models Retain Their Capabilities

arXiv cs.AI / 5/4/2026


Key Points

  • The paper argues that as LLM safeguard defenses improve, jailbreak attempts become more complex, but this complexity can impose a “jailbreak tax” that harms task performance.
  • Results from evaluating 28 jailbreak methods across multiple Claude models show that the jailbreak tax decreases as model capability increases, and the strongest jailbreaks cause little to no effective capability reduction on the most capable models.
  • Lower-capability models (e.g., Haiku 4.5) suffer much larger performance drops when jailbroken than higher-capability models (e.g., Opus 4.6): average benchmark degradation of 33.1% versus 7.7% at maximum thinking effort.
  • The study finds reasoning-heavy tasks suffer more degradation than knowledge-recall tasks, while “Boundary Point Jailbreaking” achieves near-perfect classifier evasion with near-zero degradation.
  • The authors conclude that safety cases for frontier models should not depend on the assumption that jailbreaks will meaningfully degrade model capabilities.

Abstract

As language model safeguards become more robust, attackers are pushed toward developing increasingly complex jailbreaks. Prior work has found that this complexity imposes a "jailbreak tax" that degrades the target model's task performance. We show that this tax scales inversely with model capability and that the most advanced jailbreaks effectively yield no reduction in model capabilities. Evaluating 28 jailbreaks on five benchmarks across Claude models ranging in capability from Haiku 4.5 to Opus 4.6, we find Haiku 4.5 loses an average of 33.1% on benchmark performance when jailbroken, while Opus 4.6 at max thinking effort loses only 7.7%. We also observe that across all models, reasoning-heavy tasks display considerably more degradation than knowledge-recall tasks. Finally, Boundary Point Jailbreaking, currently the strongest jailbreak against deployed classifiers, achieves near-perfect classifier evasion with near-zero degradation across safeguarded models. We recommend that safety cases for frontier models should not rely on a meaningful capability degradation from jailbreaks.
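To make the headline numbers concrete, here is a minimal sketch of one plausible way a "jailbreak tax" could be quantified. The abstract does not spell out the exact formula, so this assumes the tax is the relative drop in benchmark accuracy between a model's baseline run and its jailbroken run, averaged over benchmarks; the accuracy figures below are illustrative, not from the paper.

```python
def jailbreak_tax(baseline_acc: float, jailbroken_acc: float) -> float:
    """Relative performance loss, in percent, caused by the jailbreak
    (assumed definition: fractional drop from the baseline accuracy)."""
    return 100.0 * (baseline_acc - jailbroken_acc) / baseline_acc

def average_tax(pairs: list[tuple[float, float]]) -> float:
    """Mean tax across (baseline, jailbroken) accuracy pairs, one per benchmark."""
    return sum(jailbreak_tax(b, j) for b, j in pairs) / len(pairs)

# Made-up numbers for illustration only:
weak_model = [(0.70, 0.45), (0.60, 0.42)]    # large drops under jailbreak
strong_model = [(0.90, 0.84), (0.88, 0.81)]  # small drops under jailbreak

print(round(average_tax(weak_model), 1))    # roughly a 30%-scale tax
print(round(average_tax(strong_model), 1))  # roughly a 7%-scale tax
```

Under this assumed definition, a tax near zero means the jailbroken model answers harmful requests about as competently as it answers benign ones, which is the failure mode the authors warn safety cases against discounting.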