Understanding and Improving Continuous Adversarial Training for LLMs via In-context Learning Theory

arXiv cs.LG / 4/15/2026


Key Points

  • The paper studies Continuous Adversarial Training (CAT) for LLM jailbreak defense and provides the first theoretical explanation of why perturbations in the LLM embedding space can counter jailbreak prompts crafted in token space.
  • Using in-context learning theory for linear transformers on in-context linear regression tasks, it proves a robust generalization bound that is negatively correlated with the embedding-space perturbation radius, i.e., the guarantee tightens as the radius grows.
  • It further links the robustness of adversarially trained LLMs to the singular values of the model’s embedding matrix, offering a concrete mechanism for robustness.
  • Based on this theory, the authors propose an improved CAT objective that adds a singular-value-dependent regularization term to improve the jailbreak robustness–utility tradeoff.
  • Experiments on real-world LLMs show the proposed method increases jailbreak robustness without overly sacrificing utility, and the authors release accompanying code.
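To make the setting concrete, here is a minimal sketch of CAT's inner maximization in the paper's theoretical setup: in-context linear regression, with the adversary perturbing the context embeddings inside an L2 ball of radius rho. The function name, optimizer settings, and the simple squared loss are all illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def embedding_attack(E, w_star, rho, steps=20, lr=0.5, seed=0):
    """Inner maximization of CAT (illustrative sketch, not the paper's code):
    find a perturbation delta of the context embeddings E, constrained to
    ||delta||_F <= rho, that maximizes a squared regression loss.  Targets
    are the clean predictions, so the loss measures how far the adversary
    can move the model's output within the rho-ball."""
    rng = np.random.default_rng(seed)
    y = E @ w_star                         # clean in-context targets
    delta = 1e-3 * rng.standard_normal(E.shape)  # small random start
    for _ in range(steps):
        residual = (E + delta) @ w_star - y
        grad = residual[:, None] * w_star[None, :]  # d(loss)/d(delta)
        delta += lr * grad                 # gradient ascent on the loss
        norm = np.linalg.norm(delta)
        if norm > rho:                     # project back into the rho-ball
            delta *= rho / norm
    return delta
```

Here rho controls the strength of the training-time perturbation; per the paper's bound, robust generalization is negatively correlated with this radius.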

Abstract

Adversarial training (AT) is an effective defense for large language models (LLMs) against jailbreak attacks, but performing AT on LLMs is costly. To improve the efficiency of AT for LLMs, recent studies propose continuous AT (CAT) that searches for adversarial inputs within the continuous embedding space of LLMs during AT. While CAT has achieved empirical success, its underlying mechanism, i.e., why adversarial perturbations in the embedding space can help LLMs defend against jailbreak prompts synthesized in the input token space, remains unknown. This paper presents the first theoretical analysis of CAT on LLMs based on in-context learning (ICL) theory. For linear transformers trained with adversarial examples from the embedding space on in-context linear regression tasks, we prove a robust generalization bound that has a negative correlation with the perturbation radius in the embedding space. This clearly explains why CAT can defend against jailbreak prompts from the LLM's token space. Further, the robust bound shows that the robustness of an adversarially trained LLM is closely related to the singular values of its embedding matrix. Based on this, we propose to improve LLM CAT by introducing an additional regularization term, which depends on singular values of the LLM's embedding matrix, into the objective function of CAT. Experiments on real-world LLMs demonstrate that our method can help LLMs achieve a better jailbreak robustness-utility tradeoff. The code is available at https://github.com/fshp971/continuous-adv-icl.
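The proposed improvement augments the CAT loss with a penalty on the singular values of the embedding matrix. The summary does not state the exact functional form of the penalty, so the sketch below assumes a generic choice for illustration, the sum of squared singular values (equivalently, the squared Frobenius norm of the embedding matrix); the function name and `lam` weight are likewise hypothetical.

```python
import numpy as np

def cat_objective(adv_loss, W_e, lam=0.01):
    """Regularized CAT objective (sketch).  The paper ties robustness to
    the singular values of the embedding matrix W_e; since the summary
    does not specify the authors' penalty, we illustrate with the sum of
    squared singular values, weighted by a hypothetical coefficient lam."""
    sigma = np.linalg.svd(W_e, compute_uv=False)  # singular values only
    return adv_loss + lam * np.sum(sigma ** 2)
```

In practice, `adv_loss` would be the loss on the embedding-space adversarial examples found by CAT's inner maximization, and `lam` would be tuned to trade jailbreak robustness against utility.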