Pruning Unsafe Tickets: A Resource-Efficient Framework for Safer and More Robust LLMs

arXiv cs.LG / 4/20/2026


Key Points

  • The paper argues that even aligned LLMs can produce unsafe outputs because pretraining leaves behind “unsafe subnetworks” that existing alignment methods (supervised fine-tuning and RLHF) do not explicitly eliminate.
  • It proposes a resource-efficient, gradient-free pruning framework that identifies and removes parameters linked to unsafe behaviors while keeping overall model utility.
  • The approach is designed to be lightweight—using only modest GPU resources—and is reported to generalize across architectures and quantized model variants.
  • Experiments indicate large reductions in unsafe generations and better robustness against jailbreak attacks, with minimal loss in utility.
  • Interpreting results via the Lottery Ticket Hypothesis, the authors claim pruning can remove “unsafe tickets” and expose “safety tickets,” enabling a post-hoc alignment method for deployment in constrained environments.

Abstract

Machine learning models are increasingly deployed in real-world applications, but even aligned models such as Mistral and LLaVA still exhibit unsafe behaviors inherited from pre-training. Current alignment methods like SFT and RLHF primarily encourage models to generate preferred responses, but do not explicitly remove the unsafe subnetworks that trigger harmful outputs. In this work, we introduce a resource-efficient pruning framework that directly identifies and removes parameters associated with unsafe behaviors while preserving model utility. Our method employs a gradient-free attribution mechanism, requiring only modest GPU resources, and generalizes across architectures and quantized variants. Empirical evaluations on ML models show substantial reductions in unsafe generations and improved robustness against jailbreak attacks, with minimal utility loss. From the perspective of the Lottery Ticket Hypothesis, our results suggest that ML models contain "unsafe tickets" responsible for harmful behaviors, and pruning reveals "safety tickets" that maintain performance while aligning outputs. This provides a lightweight, post-hoc alignment strategy suitable for deployment in resource-constrained settings.
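The paper does not publish its attribution mechanism in this summary, but the core idea — score each weight by how much more it contributes on unsafe prompts than on benign ones, without computing gradients, then zero the top-scoring weights — can be sketched on a toy linear layer. Everything below (the contribution proxy `|w_ij| * mean|x_j|`, the function names, the pruning fraction) is an illustrative assumption, not the authors' actual method:

```python
import numpy as np

def unsafe_attribution_scores(W, unsafe_X, safe_X):
    """Gradient-free attribution proxy (hypothetical, not the paper's exact
    mechanism): score each weight by the gap between its average absolute
    contribution on unsafe inputs and on safe inputs."""
    unsafe_contrib = np.abs(W) * np.abs(unsafe_X).mean(axis=0)  # shape (out, in)
    safe_contrib = np.abs(W) * np.abs(safe_X).mean(axis=0)
    return unsafe_contrib - safe_contrib

def prune_unsafe(W, scores, frac=0.01):
    """Zero out the top `frac` fraction of weights by unsafe-attribution score."""
    k = max(1, int(frac * W.size))
    thresh = np.partition(scores.ravel(), -k)[-k]  # k-th largest score
    return W * (scores < thresh)                   # keep weights below threshold

# Toy demo: unsafe prompts activate the first 4 input features more strongly,
# so the pruned weights should concentrate in those columns.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
unsafe_X = rng.normal(size=(32, 16)) * np.r_[np.full(4, 3.0), np.ones(12)]
safe_X = rng.normal(size=(32, 16))
scores = unsafe_attribution_scores(W, unsafe_X, safe_X)
W_pruned = prune_unsafe(W, scores, frac=0.1)
print(f"zeroed {int((W_pruned == 0).sum())} of {W.size} weights")
```

The gradient-free scoring is what keeps the GPU footprint modest: it needs only forward-pass activation statistics, never a backward pass, which matches the paper's claim that the framework also applies to quantized variants (where gradients are unavailable or unreliable).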
