Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attack Algorithms for LLMs

arXiv cs.LG / 3/26/2026

💬 Opinion · Signals & Early Trends · Models & Research

Key Points

  • The paper describes an autoresearch-style agent pipeline (using Claude Code) that automatically discovers new white-box adversarial attack algorithms for LLM jailbreaking and prompt injection.
  • The agent iteratively improves on existing attack implementations (e.g., GCG), producing methods that outperform 30+ prior approaches on evaluation benchmarks.
  • Reported results include up to ~40% attack success rate on CBRN-related queries against GPT-OSS-Safeguard-20B, compared with ≤10% for the best existing baselines.
  • The attacks generalize via transfer: suffixes optimized on surrogate models achieve 100% ASR against the held-out Meta-SecAlign-70B, versus 56% for the best baseline.
  • The authors release the discovered attacks, baseline implementations, and evaluation code on GitHub, framing the work as an early step toward automated security red-teaming.
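The pipeline the Key Points describe is, at its core, a propose-evaluate-select loop over attack configurations, driven by the quantitative ASR signal. A minimal toy sketch of that loop (the parameter names, `evaluate_asr` scoring function, and mutation scheme here are all invented stand-ins; the actual pipeline has Claude Code rewrite attack code and measures real ASR on benchmarks):

```python
import random

def evaluate_asr(attack_params, queries):
    """Hypothetical stand-in for running an attack against a target model and
    measuring attack success rate (ASR) on a query set. Here we just score
    how close the params are to an arbitrary hidden optimum."""
    target = {"suffix_len": 20, "top_k": 256}
    score = 1.0 - (abs(attack_params["suffix_len"] - target["suffix_len"]) / 100
                   + abs(attack_params["top_k"] - target["top_k"]) / 1024)
    return max(0.0, min(1.0, score))

def autoresearch_loop(initial_params, queries, iterations=50, seed=0):
    """Greedy propose-evaluate-select loop: mutate the current best attack
    configuration, keep the mutation only if the measured ASR improves."""
    rng = random.Random(seed)
    best = dict(initial_params)
    best_asr = evaluate_asr(best, queries)
    for _ in range(iterations):
        candidate = dict(best)
        key = rng.choice(list(candidate))
        candidate[key] = max(1, candidate[key] + rng.choice([-8, -1, 1, 8]))
        asr = evaluate_asr(candidate, queries)
        if asr > best_asr:  # dense, quantitative feedback drives selection
            best, best_asr = candidate, asr
    return best, best_asr
```

The greedy accept-if-better rule is the simplest possible selection policy; the paper's agent presumably makes far richer code-level edits, but the feedback structure — a scalar ASR per candidate — is the same.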

Abstract

LLM agents like Claude Code can not only write code but also conduct autonomous AI research and engineering (Rank et al., 2026; Novikov et al., 2025). We show that an *autoresearch*-style pipeline (Karpathy, 2026) powered by Claude Code discovers novel white-box adversarial attack *algorithms* that **significantly outperform all existing (30+) methods** in jailbreaking and prompt injection evaluations. Starting from existing attack implementations, such as GCG (Zou et al., 2023), the agent iterates to produce new algorithms achieving up to 40% attack success rate on CBRN queries against GPT-OSS-Safeguard-20B, compared to ≤10% for existing algorithms (teaser figure, left). The discovered algorithms generalize: attacks optimized on surrogate models transfer directly to held-out models, achieving **100% ASR against Meta-SecAlign-70B** (Chen et al., 2025) versus 56% for the best baseline (teaser figure, middle). Extending the findings of Carlini et al. (2025), our results are an early demonstration that incremental safety and security research can be automated with LLM agents. White-box adversarial red-teaming is particularly well suited for this: existing methods provide strong starting points, and the optimization objective yields dense, quantitative feedback. We release all discovered attacks alongside baseline implementations and evaluation code at https://github.com/romovpa/claudini.
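The abstract highlights why white-box attacks suit automation (a dense scalar objective) and why surrogate optimization transfers. Both ideas fit in a toy, gradient-free sketch (the string search space and both loss functions below are invented stand-ins; real GCG optimizes token sequences using the target model's gradients):

```python
import random

# Hypothetical toy "models": each maps an attack suffix to a loss, where lower
# loss stands in for a more successful jailbreak.
def surrogate_loss(suffix):
    return sum(abs(ord(c) - ord("a")) for c in suffix) / (26 * len(suffix))

def heldout_loss(suffix):
    # Correlated but not identical objective, standing in for a held-out model.
    return 0.9 * surrogate_loss(suffix) + 0.1 * (suffix.count("z") / len(suffix))

def optimize_suffix(loss_fn, length=12, steps=300, seed=0):
    """GCG-style greedy coordinate search (gradient-free variant): mutate one
    position at a time and keep the change only if the loss decreases."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    suffix = [rng.choice(alphabet) for _ in range(length)]
    best = loss_fn("".join(suffix))
    for _ in range(steps):
        i = rng.randrange(length)
        old = suffix[i]
        suffix[i] = rng.choice(alphabet)
        cand = loss_fn("".join(suffix))
        if cand < best:
            best = cand
        else:
            suffix[i] = old  # revert rejected mutation
    return "".join(suffix)
```

Because the held-out objective is correlated with the surrogate, a suffix optimized only against `surrogate_loss` also scores well under `heldout_loss` — a cartoon of the paper's surrogate-to-held-out transfer result.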