Merging Triggers, Breaking Backdoors: Defensive Poisoning for Instruction-Tuned Language Models

arXiv cs.CL / 4/1/2026

💬 OpinionIdeas & Deep AnalysisModels & Research

Key Points

  • The paper highlights that instruction-tuned LLMs are vulnerable to backdoor attacks due to training data being sourced from humans or the web, allowing adversaries to poison a small subset to implant hidden behaviors.
  • It introduces MB-Defense, a two-stage training pipeline combining “Defensive Poisoning” (merging attacker and defensive triggers into a unified backdoor representation) with “Backdoor Neutralization” (breaking that representation via further training to restore clean behavior).
  • Experiments reported across multiple LLMs indicate MB-Defense significantly reduces attack success rates while largely preserving the models’ instruction-following capabilities.
  • The authors claim the approach is generalizable and data-efficient, targeting robustness against both known and unseen backdoor threat variants.

Abstract

Large Language Models (LLMs) have greatly advanced Natural Language Processing (NLP), particularly through instruction tuning, which enables broad task generalization without additional fine-tuning. However, their reliance on large-scale datasets-often collected from human or web sources-makes them vulnerable to backdoor attacks, where adversaries poison a small subset of data to implant hidden behaviors. Despite this growing risk, defenses for instruction-tuned models remain underexplored. We propose MB-Defense (Merging & Breaking Defense Framework), a novel training pipeline that immunizes instruction-tuned LLMs against diverse backdoor threats. MB-Defense comprises two stages: (i) Defensive Poisoning, which merges attacker and defensive triggers into a unified backdoor representation, and (ii) Backdoor Neutralization, which breaks this representation through additional training to restore clean behavior. Extensive experiments across multiple LLMs show that MB-Defense substantially lowers attack success rates while preserving instruction-following ability. Our method offers a generalizable and data-efficient defense strategy, improving the robustness of instruction-tuned LLMs against unseen backdoor attacks.