Continual Safety Alignment via Gradient-Based Sample Selection

arXiv cs.LG · April 21, 2026


Key Points

  • The paper addresses how continual fine-tuning of large language models can lead to alignment drift, degrading safety behaviors like refusal accuracy, truthfulness, and commonsense reasoning.
  • Empirical results suggest training samples contribute unevenly to drift: high-gradient samples worsen safety alignment and pull the model toward pretrained distributions, while moderate-gradient samples support task learning with less alignment loss.
  • The authors propose a gradient-based sample selection strategy that filters out high-gradient samples during fine-tuning to preserve safety alignment.
  • Across multiple model families and continual domain tasks, the method significantly improves alignment preservation while keeping task performance competitive, and it does so without needing curated safe datasets or architectural changes.
  • The approach is reported to be robust across different selection ratios and task orderings, and to hold up when evaluated against diverse attack benchmarks.

Abstract

Large language models require continuous adaptation to new tasks while preserving safety alignment. However, fine-tuning on even benign data often compromises safety behaviors, including refusal of harmful requests, truthfulness, and commonsense reasoning. We investigate which training samples cause alignment drift through a data-centric lens. Our empirical analysis shows samples contribute unequally: high-gradient samples cause greater safety degradation and drive models toward pretrained distributions, while moderate-gradient samples enable task learning with minimal alignment loss. We propose gradient-based sample selection that filters high-gradient samples during fine-tuning. Across multiple model families on continual domain tasks, our method substantially improves alignment preservation while maintaining competitive task performance, without requiring curated safe data or architectural modifications. Our method is robust across selection ratios, task orderings, and diverse attack benchmarks.
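The core selection rule described above, ranking training samples by per-sample gradient magnitude and discarding the highest-gradient fraction before fine-tuning, can be sketched in miniature. This is a minimal illustration on a one-parameter model with squared loss, not the paper's implementation; the function names, the `drop_ratio` parameter, and the toy data are assumptions for demonstration.

```python
# Hedged sketch of gradient-based sample selection: rank samples by
# per-sample gradient norm, then keep only the lower-gradient fraction
# for fine-tuning. Illustrative only; names are not from the paper.

def grad_norm(w, x, y):
    """Gradient magnitude of squared loss 0.5*(w*x - y)^2 w.r.t. w."""
    return abs((w * x - y) * x)  # d/dw [0.5*(w*x - y)^2] = (w*x - y)*x

def select_samples(w, data, drop_ratio=0.2):
    """Keep the (1 - drop_ratio) fraction of samples with the
    smallest gradient norms; drop the high-gradient rest."""
    ranked = sorted(data, key=lambda s: grad_norm(w, *s))
    keep = max(1, int(len(ranked) * (1.0 - drop_ratio)))
    return ranked[:keep]

if __name__ == "__main__":
    w = 1.0  # current model parameter
    data = [(1.0, 1.1), (2.0, 2.0), (3.0, 9.0), (0.5, 0.4)]  # (x, y) pairs
    kept = select_samples(w, data, drop_ratio=0.25)
    # The outlier (3.0, 9.0) has by far the largest gradient and is dropped.
    print(kept)
```

In the paper's setting, the same idea would apply per training batch on an LLM, with gradient norms computed from the fine-tuning loss; the sketch only conveys the filtering logic.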