
Jailbreak Scaling Laws for Large Language Models: Polynomial-Exponential Crossover

arXiv cs.AI / 3/13/2026


Key Points

  • The paper demonstrates that adversarial prompt injection can shift the scaling of attack success rate with the number of inference-time samples from polynomial to exponential, depending on the length of the injected prompt (see the sketch after this list).
  • It proposes a spin-glass-inspired theoretical model where unsafe generations correspond to low-energy clusters in a Gibbs measure, with long prompts acting like a strong magnetic field.
  • The authors derive the scaling laws analytically and validate them empirically on large language models, showing a phase transition to an ordered unsafe regime under strong injected prompts.
  • The findings carry safety implications: defense strategies must account for the dramatic increase in risk as injected-prompt length and sampling budget grow, which can inform prompt-safety research and mitigations.
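
To make the crossover concrete, here is a minimal numerical sketch comparing the two regimes. The functional forms and constants (exponent `a`, rate `c`) are illustrative stand-ins chosen for this sketch, not the scaling laws derived in the paper: the "polynomial" curve assumes the probability that all samples stay safe shrinks as a power law in the number of samples N, and the "exponential" curve assumes it shrinks exponentially in N.

```python
# Illustrative comparison of the two scaling regimes (assumed forms, not the
# paper's derived expressions): in the polynomial regime the probability that
# all N samples stay safe decays as a power law in N; in the exponential
# regime it decays exponentially in N.
import numpy as np

N = np.array([1, 10, 100, 1_000, 10_000])
a, c = 0.5, 0.01  # assumed illustrative power-law exponent and exponential rate

asr_polynomial = 1 - N ** (-a)         # e.g., short or no injected prompt
asr_exponential = 1 - np.exp(-c * N)   # e.g., long injected prompt (strong field)

for n, p, e in zip(N, asr_polynomial, asr_exponential):
    print(f"N = {n:>6}: ASR(poly) ~ {p:.3f}   ASR(exp) ~ {e:.3f}")
```

With these illustrative constants, the power-law curve saturates slowly (its failure probability at N = 10,000 is still about 1%), while the exponential curve's failure probability is already below 10^-4 at N = 1,000, which is why the crossover matters for defenses that rely on low per-sample risk.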

Abstract

Adversarial attacks can reliably steer safety-aligned large language models toward unsafe behavior. Empirically, we find that adversarial prompt-injection attacks can amplify the attack success rate from the slow polynomial growth observed without injection to exponential growth with the number of inference-time samples. To explain this phenomenon, we propose a theoretical proxy generative model of language in terms of a spin-glass system operating in a replica-symmetry-breaking regime, where generations are drawn from the associated Gibbs measure and a subset of low-energy, size-biased clusters is designated unsafe. Within this framework, we analyze prompt-injection-based jailbreaking. Short injected prompts correspond to a weak magnetic field aligned toward unsafe cluster centers and yield a power-law scaling of the attack success rate with the number of inference-time samples, while long injected prompts, i.e., a strong magnetic field, yield exponential scaling. We derive these behaviors analytically and confirm them empirically on large language models. The transition between the two regimes is due to the appearance of an ordered phase in the spin chain under a strong magnetic field, which suggests that the injected jailbreak prompt enhances adversarial order in the language model.
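
The abstract's picture of a long injected prompt acting as a strong magnetic field can be mimicked with a toy Gibbs measure over a finite set of generation clusters. The sketch below is only an illustration under stated assumptions (number of clusters K, inverse temperature beta, a single designated unsafe cluster); the paper's actual model is a spin glass in a replica-symmetry-breaking regime with size-biased low-energy clusters, which this toy does not reproduce.

```python
# Toy illustration (not the paper's model): a Gibbs measure over K clusters
# with random energies. An external field of strength h, standing in for the
# injected prompt, lowers the energy of the designated unsafe cluster; as h
# grows, the Gibbs measure concentrates on that cluster.
import numpy as np

rng = np.random.default_rng(0)

K = 1_000                      # assumed number of generation clusters
beta = 2.0                     # assumed inverse temperature
E = rng.normal(size=K)         # random cluster energies
unsafe = int(np.argmin(E))     # designate the lowest-energy cluster as unsafe

def p_unsafe(h: float) -> float:
    """Probability of sampling the unsafe cluster under field strength h."""
    energies = E.copy()
    energies[unsafe] -= h      # field aligned with the unsafe cluster center
    weights = np.exp(-beta * (energies - energies.min()))  # stabilized Gibbs weights
    return float(weights[unsafe] / weights.sum())

for h in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"h = {h:3.1f}: P(unsafe cluster) = {p_unsafe(h):.3f}")
```

In this toy, increasing h plays the role the abstract assigns to longer injected prompts: it tilts the measure toward the unsafe cluster, and once that tilt dominates the energy fluctuations the aligned "ordered" state takes over, echoing the phase transition described above.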