GR-SAP: Generative Replay for Safety Alignment Preservation during Fine-Tuning

arXiv cs.CL / 3/12/2026

Key Points

  • GR-SAP introduces a unified generative replay framework that synthesizes domain-specific alignment data from LLMs to preserve safety alignment during downstream fine-tuning.
  • The approach tackles the issue that original alignment data is often inaccessible, showing synthetic data can serve as a reliable proxy during training.
  • The paper provides theoretical and empirical analyses across multiple models and tasks demonstrating that GR-SAP substantially mitigates safety degradation while maintaining downstream performance.
  • The code is released on GitHub, enabling implementation and replication of the method.

Abstract

Recent studies show that the safety alignment of large language models (LLMs) can be easily compromised even by seemingly non-adversarial fine-tuning. To preserve safety alignment during fine-tuning, a widely used strategy is to jointly optimize safety and task objectives by mixing in the original alignment data, which, however, is typically inaccessible even for open-weight LLMs. Inspired by generative replay in continual learning, we propose Generative Replay for Safety Alignment Preservation (GR-SAP), a unified framework that synthesizes domain-specific alignment data from LLMs and integrates it during downstream adaptation to preserve safety alignment. Theoretical and empirical analyses demonstrate that this synthetic data serves as a reliable proxy for the original alignment data. Experiments across various models and downstream tasks show that GR-SAP substantially mitigates fine-tuning-induced safety degradation while maintaining comparable downstream performance. Our code is available at https://github.com/chili-lab/gr-sap.
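The abstract describes the core recipe at a high level: synthesize safety-alignment examples from the model itself, then mix them into the downstream task data during fine-tuning. The sketch below illustrates only that mixing step in plain Python; the function name, the `replay_ratio` parameter, and the batch representation are illustrative assumptions, not GR-SAP's actual implementation (see the linked repository for that).

```python
import random

def mix_replay(task_batch, replay_pool, replay_ratio=0.2, seed=0):
    """Illustrative sketch of generative-replay mixing (not the official
    GR-SAP code): blend synthetic safety-alignment examples from
    `replay_pool` into a task fine-tuning batch so that roughly
    `replay_ratio` of the mixed batch consists of replay examples."""
    rng = random.Random(seed)
    # Number of replay examples needed so they form ~replay_ratio of the mix.
    n_replay = max(1, round(len(task_batch) * replay_ratio / (1 - replay_ratio)))
    replay = rng.sample(replay_pool, min(n_replay, len(replay_pool)))
    mixed = task_batch + replay
    rng.shuffle(mixed)  # interleave so both objectives appear in each step
    return mixed

# Usage: 8 task examples mixed with ~20% synthetic alignment examples.
task = [f"task-{i}" for i in range(8)]
pool = [f"safety-{i}" for i in range(10)]
batch = mix_replay(task, pool, replay_ratio=0.2)
```

In a real training loop, each mixed batch would feed the usual language-modeling loss, so the safety objective is optimized jointly with the task objective without access to the original alignment corpus.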