Adaptive Prompt Embedding Optimization for LLM Jailbreaking

arXiv cs.AI / April 29, 2026


Key Points

  • The paper introduces Prompt Embedding Optimization (PEO), a white-box LLM jailbreak method that optimizes the embeddings of the original prompt tokens rather than adding discrete adversarial suffix tokens.
  • The authors argue that although perturbing embeddings could harm prompt semantics, their optimized embeddings remain close enough to the originals that the visible prompt string is preserved after nearest-token projection.
  • PEO uses a multi-round optimization strategy with structured continuation targets and an adaptive, failure-focused schedule to improve attack success.
  • The method can leverage composite response scaffolds in later rounds, but an evaluation with ASR-Judge indicates the improvements are not just formatting artifacts or scaffold-only outputs.
  • Across two harmful-behavior benchmarks, PEO outperforms several competing white-box jailbreak approaches, including discrete suffix search, adversarial embedding appending, and search-based adversarial generation.

Abstract

Existing white-box jailbreak attacks against aligned LLMs typically append discrete adversarial suffixes to the user prompt, which visibly alters the prompt and requires search over a combinatorial token space. Prior work has avoided directly optimizing the embeddings of the original prompt tokens, presumably because perturbing them risks destroying the prompt's semantic content. We propose Prompt Embedding Optimization (PEO), a multi-round white-box jailbreak that directly optimizes the embeddings of the original prompt tokens without appending any adversarial tokens, and show that this concern is unfounded: the optimized embeddings remain close enough to their originals that the visible prompt string is preserved exactly after nearest-token projection, and quantitative analysis shows the model's responses stay on topic for the large majority of prompts. PEO combines continuous embedding-space optimization with structured continuation targets and an adaptive, failure-focused schedule. Counterintuitively, later PEO rounds can benefit from heuristic composite response scaffolds that are not natural standalone templates, yet evaluation with ASR-Judge shows that the resulting gains are not merely empty formatting or scaffold-only outputs. Across two standard harmful-behavior benchmarks, PEO outperforms competing white-box attacks spanning discrete suffix search, appended adversarial embeddings, and search-based adversarial generation.
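The preservation claim above hinges on nearest-token projection: each optimized embedding is snapped back to the closest row of the model's token embedding matrix, and the prompt counts as string-preserving only if this recovers the original token ids. A minimal sketch of such a check (the function name, the L2 metric, and the toy data are assumptions for illustration, not details from the paper):

```python
import numpy as np

def nearest_token_projection(embeddings, embedding_matrix):
    """Map each continuous embedding to the id of its nearest vocabulary token.

    Illustrative sketch only; the L2 metric is an assumption, not a
    detail taken from the PEO paper.
    """
    # Squared L2 distance from every embedding to every vocabulary row:
    # dists[i, j] = ||embeddings[i] - embedding_matrix[j]||^2
    dists = (
        (embeddings ** 2).sum(axis=1, keepdims=True)
        - 2.0 * embeddings @ embedding_matrix.T
        + (embedding_matrix ** 2).sum(axis=1)
    )
    return dists.argmin(axis=1)

# Toy check: a small perturbation of token embeddings still projects
# back onto the original token ids, i.e. the prompt string is preserved.
rng = np.random.default_rng(0)
E = rng.normal(size=(100, 16))            # toy vocabulary of 100 tokens
orig_ids = np.array([3, 41, 7])
perturbed = E[orig_ids] + 0.01 * rng.normal(size=(3, 16))
assert np.array_equal(nearest_token_projection(perturbed, E), orig_ids)
```

In this toy setting the perturbation is small relative to inter-token distances, so projection recovers the original ids; the paper's quantitative claim is that the same holds for its optimized embeddings on real prompts.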