AI Navigate

Exposing Long-Tail Safety Failures in Large Language Models through Efficient Diverse Response Sampling

arXiv cs.CL · March 17, 2026


Key Points

  • The paper demonstrates that safety-tuning can still miss rare unsafe behaviors, leaving long-tail risks in LLM outputs.
  • It introduces Progressive Diverse Population Sampling (PDPS), a method that combines stochastic token sampling with diversity-aware selection to generate a large pool of candidate responses and retain a compact, diverse subset.
  • PDPS matches the jailbreak success rates of large-scale IID sampling at only 8% to 29% of the computational cost, and in limited-response settings it improves success rates by 26% to 40% over both IID sampling and Diverse Beam Search.
  • Across multiple jailbreak benchmarks and open-source LLMs, PDPS yields more diverse unsafe outputs, broadening the range of detectable failures.

Abstract

Safety tuning through supervised fine-tuning and reinforcement learning from human feedback has substantially improved the robustness of large language models (LLMs). However, it often suppresses rather than eliminates unsafe behaviors, leaving rare but critical failures hidden in the long tail of the output distribution. While most red-teaming work emphasizes adversarial prompt search (input-space optimization), we show that safety failures can also be systematically exposed through diverse response generation (output-space exploration) for a fixed safety-critical prompt, where increasing the number and diversity of sampled responses can drive jailbreak success rates close to unity. To efficiently uncover such failures, we propose Progressive Diverse Population Sampling (PDPS), which combines stochastic token-level sampling with diversity-aware selection to explore a large candidate pool of responses and retain a compact, semantically diverse subset. Across multiple jailbreak benchmarks and open-source LLMs, PDPS achieves attack success rates comparable to large-scale IID sampling while using only 8% to 29% of the computational cost. Under limited-response settings, it improves success rates by 26% to 40% over IID sampling and Diverse Beam Search. Furthermore, responses generated by PDPS exhibit both a higher number and greater diversity of unsafe outputs, demonstrating its effectiveness in uncovering a broader range of failures.
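The diversity-aware selection step described above — exploring a large candidate pool and retaining a compact, semantically diverse subset — can be illustrated with a greedy farthest-point heuristic. This is only a minimal sketch under assumptions: the function names, the toy character-code "embedding" in the usage example, and the max-min selection rule are all illustrative stand-ins, not the paper's actual PDPS procedure, which additionally interleaves stochastic token-level sampling progressively during generation.

```python
def diverse_subset(candidates, embed, k):
    """Greedily select up to k candidates that are maximally spread out
    in embedding space (farthest-point selection), so a small retained
    subset still covers diverse regions of the response distribution."""
    vecs = [embed(c) for c in candidates]

    def dist(a, b):
        # Euclidean distance between two embedding vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    selected = [0]  # seed the subset with the first candidate
    while len(selected) < min(k, len(candidates)):
        # Add the candidate whose nearest already-selected neighbor
        # is farthest away, i.e. the most novel remaining response.
        best = max(
            (i for i in range(len(candidates)) if i not in selected),
            key=lambda i: min(dist(vecs[i], vecs[j]) for j in selected),
        )
        selected.append(best)
    return [candidates[i] for i in selected]


# Toy usage: strings embedded by character codes; "zz" is far from "aa",
# so a diverse 3-subset keeps both extremes plus a midpoint.
pool = ["aa", "ab", "zz", "zy", "mm"]
subset = diverse_subset(pool, lambda s: [ord(c) for c in s], 3)
```

In a real red-teaming setting, `embed` would be a sentence-embedding model and `candidates` a pool of stochastically sampled responses to one fixed safety-critical prompt; the intuition carried over from the paper is that retaining semantically spread-out responses surfaces more of the rare unsafe behaviors than keeping near-duplicates.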