DualEdit: Mitigating Safety Fallback in LLM Backdoor Editing via Affirmation-Refusal Regulation

arXiv cs.CL / March 25, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper shows that safety-aligned LLMs are vulnerable to backdoor attacks injected via model editing, but finds that existing editing-based attacks are unstable: the edited model often starts with an affirmative prefix and then reverts to refusals during generation, a failure the authors call “safety fallback.”
  • It proposes DualEdit, a dual-objective editing framework that both promotes affirmative tokens and suppresses refusal tokens during generation.
  • DualEdit addresses two challenges, objective imbalance and refusal diversity, with dynamic loss weighting, which calibrates the relative scales of the two objectives to stabilize optimization, and value anchoring, which clusters diverse refusal/affirmation token representations into compact anchors to reduce conflicts and improve generalization.
  • Experiments on safety-aligned LLMs report that DualEdit increases attack success rate by about 10% and reduces safety-fallback rate by about 11% compared with baseline editing-based attacks.

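The dual objective in the bullets above can be pictured as a single loss that rewards affirmative tokens and penalizes refusal tokens at a decoding step. The sketch below is a hypothetical NumPy illustration, not the paper's implementation: the function name, the token-id sets, and the scalar weights are all assumptions.

```python
import numpy as np

def dual_objective_loss(logits, affirm_ids, refuse_ids, w_affirm=1.0, w_refuse=1.0):
    """Hypothetical sketch of a dual objective at one decoding step:
    raise the log-probability of affirmative tokens (e.g. "Sure") while
    pushing down that of refusal tokens (e.g. "Sorry", "cannot")."""
    m = logits.max()
    log_probs = logits - (m + np.log(np.exp(logits - m).sum()))  # stable log-softmax
    affirm_loss = -log_probs[affirm_ids].mean()  # promote affirmative tokens
    refuse_loss = log_probs[refuse_ids].mean()   # suppress refusal tokens
    # Dynamic loss weighting (per the paper) would choose w_affirm / w_refuse
    # from the pre-edited model so the two terms start at comparable scale.
    return w_affirm * affirm_loss + w_refuse * refuse_loss
```

Minimizing this loss moves probability mass toward the affirmative set and away from the refusal set simultaneously, which is the behavior the single-objective baselines lack.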
Abstract

Safety-aligned large language models (LLMs) remain vulnerable to backdoor attacks. Recent model editing-based approaches enable efficient backdoor injection by directly modifying a small set of parameters to map triggers to attacker-desired behaviors. However, we find that existing editing-based attacks are often unstable under safety alignment: the edited model may start with an affirmative prefix but later revert to refusals during generation. We term this phenomenon safety fallback. To mitigate it, we propose DualEdit, a dual-objective model editing framework that simultaneously promotes affirmative tokens and suppresses refusal tokens. DualEdit further addresses two key challenges, objective imbalance and refusal diversity, via two complementary techniques: (1) dynamic loss weighting, which calibrates the relative scales of the two objectives using the pre-edited model to stabilize optimization, and (2) value anchoring, which clusters representative attention value vectors to form compact anchors, reducing conflicts from overly diverse token sets and improving generalization. Experiments on safety-aligned LLMs show that DualEdit improves attack success by 10% and reduces safety fallback rate by 11% over baselines.
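Value anchoring, as described in the abstract, clusters representative attention value vectors into a few compact anchors so the edit does not have to satisfy every token in a diverse refusal/affirmation set at once. A minimal sketch using plain k-means follows; the clustering method, names, and parameters are illustrative assumptions, and the paper's exact procedure may differ.

```python
import numpy as np

def value_anchors(value_vectors, k=2, iters=20, seed=0):
    """Cluster per-token attention value vectors of shape (n, d) into k
    anchor vectors with a simple k-means loop, so an edit can target a
    few compact anchors instead of every individual token's vector."""
    rng = np.random.default_rng(seed)
    n = value_vectors.shape[0]
    centers = value_vectors[rng.choice(n, size=k, replace=False)].copy()
    for _ in range(iters):
        # Squared distance of every vector to every current anchor.
        d2 = ((value_vectors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)            # nearest anchor per vector
        for j in range(k):
            if (assign == j).any():           # leave empty clusters unchanged
                centers[j] = value_vectors[assign == j].mean(axis=0)
    return centers
```

In this picture, an edit optimized against the k anchors conflicts less than one optimized against every token vector individually, which is the generalization benefit the abstract attributes to value anchoring.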