Continual Safety Alignment via Gradient-Based Sample Selection
arXiv cs.LG / 4/21/2026
Key Points
- The paper addresses how continual fine-tuning of large language models can lead to alignment drift, eroding safety-relevant capabilities such as refusal accuracy, truthfulness, and commonsense reasoning.
- Empirical results suggest training samples contribute unevenly to drift: high-gradient samples worsen safety alignment and pull the model toward pretrained distributions, while moderate-gradient samples support task learning with less alignment loss.
- The authors propose a gradient-based sample selection strategy that filters out high-gradient samples during fine-tuning to preserve safety alignment (see the sketch after this list).
- Across multiple model families and continual domain tasks, the method significantly improves alignment preservation while keeping task performance competitive, and it does so without needing curated safe datasets or architectural changes.
- The approach is reported to be robust across different selection ratios and task orderings, and to hold up under evaluation against diverse attack benchmarks.
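The selection mechanism in the third bullet is simple to illustrate. Below is a minimal, hypothetical PyTorch sketch of gradient-magnitude filtering: compute a per-sample gradient norm, then drop the highest-gradient fraction of a batch before taking a fine-tuning step. The `gradient_norm_filter` name, the `keep_ratio` parameter, and the generic `loss_fn(model(inputs), labels)` interface are assumptions for illustration only; the paper's exact gradient statistic, threshold, and selection granularity may differ.

```python
import torch


def gradient_norm_filter(model, loss_fn, batch, keep_ratio=0.8):
    """Drop the highest-gradient fraction of a batch before a fine-tuning step.

    Hypothetical helper for illustration; not the authors' implementation.
    batch: list of (inputs, labels) pairs, one example per entry.
    keep_ratio: fraction of lowest-gradient samples to keep for training.
    """
    norms = []
    for inputs, labels in batch:
        model.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        # L2 norm over all parameter gradients for this single sample
        sq_sum = sum((p.grad ** 2).sum() for p in model.parameters()
                     if p.grad is not None)
        norms.append(torch.sqrt(sq_sum).item())

    # Keep the lowest-gradient samples; the high-gradient ones are those
    # reported to drive alignment drift toward the pretrained distribution.
    k = max(1, int(keep_ratio * len(batch)))
    cutoff = sorted(norms)[k - 1]
    kept = [example for example, n in zip(batch, norms) if n <= cutoff]

    model.zero_grad()
    return kept
```

In this reading, the filtered subset is what actually gets used for the optimizer update, so no curated safe dataset or architectural change is required, matching the paper's stated selling point.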