Understanding the Effects of Safety Unalignment on Large Language Models

arXiv cs.AI / 4/6/2026


Key Points

  • The paper examines how “safety unalignment” techniques—specifically jailbreak-tuning (JT) and weight orthogonalization (WO)—affect large language models’ behavior beyond simple refusal-rate changes.
  • It evaluates six popular LLMs across many benign and malicious tasks and finds that refusal degradation is distributed across JT and WO rather than isolated to one method.
  • WO unalignment is shown to produce models substantially more capable of facilitating malicious activity than JT-unaligned models, including greater effectiveness on state-of-the-art adversarial and cyber attacks.
  • In contrast to JT-unaligned models, WO-unaligned models are reported to be less prone to hallucinations and to better preserve natural-language performance.
  • The authors propose mitigation via supervised fine-tuning, which they claim can substantially limit the adversarial abilities enabled by WO without drastically affecting hallucination rates or natural-language performance.

Abstract

Safety alignment has become a critical step to ensure LLMs refuse harmful requests while providing helpful and harmless responses. However, despite the ubiquity of safety alignment for deployed frontier models, two separate lines of recent work--jailbreak-tuning (JT) and weight orthogonalization (WO)--have shown that safety guardrails may be largely disabled, resulting in LLMs which comply with harmful requests they would normally refuse. In spite of far-reaching safety implications, analysis has largely been limited to refusal rates of each unalignment method in isolation, leaving their relative effects on adversarial LLM capabilities unknown. To fill this gap, we study the impact of unaligning six popular LLMs of various sizes across a large number of malicious and benign tasks, using both JT and WO. Across the evaluated models, we show that while refusal degradation is split between the two methods, WO produces LLMs far more capable of aiding in malicious activity; in contrast to JT, the majority of WO unaligned models are far less prone to hallucinations, better retain their original natural-language performance, and are more effective at state-of-the-art adversarial and cyber attacks. To thus help mitigate the malicious risks of WO unalignment, we conclude by showing that supervised fine-tuning effectively limits the adversarial attack abilities enabled by WO, without drastically affecting hallucination rates or natural language performance.