SafeRoPE: Risk-specific Head-wise Embedding Rotation for Safe Generation in Rectified Flow Transformers

arXiv cs.CV / 4/3/2026


Key Points

  • The paper analyzes rectified-flow transformer text-to-image models (e.g., MMDiT) and shows that unsafe semantics are concentrated in identifiable, low-dimensional attention subspaces within a small set of safety-critical heads.
  • It introduces SafeRoPE, which uses head-wise decomposition of unsafe embeddings to compute a Latent Risk Score (LRS) by projecting input vectors onto these unsafe subspaces.
  • SafeRoPE applies targeted, head-wise perturbations to Rotary Positional Embedding (RoPE) on query/key vectors to suppress unsafe concepts while preserving benign content and overall image quality.
  • By combining LRS-guided risk estimation with RoPE-based risk-specific rotation, SafeRoPE provides lightweight, fine-grained safety mitigation without the costly fine-tuning or attention-modulation approaches that are hard to adapt to transformer-based diffusion models.
  • The authors report extensive experiments achieving state-of-the-art trade-offs between harmful-content mitigation and utility preservation for safe generation in MMDiT, and they release code on GitHub.
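The subspace-projection idea behind the Latent Risk Score can be illustrated with a small sketch. This is not the authors' implementation; the function names, the use of SVD to extract a basis, and the energy-ratio scoring are assumptions about how such a head-wise projection score could look:

```python
import numpy as np

def unsafe_subspace(unsafe_embeds: np.ndarray, k: int = 8) -> np.ndarray:
    """Build a low-dimensional 'unsafe' subspace for one attention head.

    unsafe_embeds: (n_samples, head_dim) embeddings of unsafe prompts for that head.
    Returns an orthonormal basis of shape (k, head_dim) (top-k right singular vectors).
    """
    centered = unsafe_embeds - unsafe_embeds.mean(axis=0)
    # Rows of vt are orthonormal principal directions of the unsafe embeddings.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]

def latent_risk_score(x: np.ndarray, basis: np.ndarray) -> float:
    """Fraction of x's energy lying in the unsafe subspace (~0 benign, ~1 unsafe)."""
    proj = basis.T @ (basis @ x)  # orthogonal projection onto span(basis)
    return float(np.linalg.norm(proj) / (np.linalg.norm(x) + 1e-8))
```

A vector lying inside the subspace scores near 1, while a random vector in a much higher-dimensional head space scores low, which is what lets the score gate mitigation per head.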

Abstract

Recent Text-to-Image (T2I) models based on rectified-flow transformers (e.g., SD3, FLUX) achieve high generative fidelity but remain vulnerable to unsafe semantics, especially when triggered by multi-token interactions. Existing mitigation methods largely rely on fine-tuning or attention modulation for concept unlearning; however, their expensive computational overhead and design tailored to U-Net-based denoisers hinder direct adaptation to transformer-based diffusion models (e.g., MMDiT). In this paper, we conduct an in-depth analysis of the attention mechanism in MMDiT and find that unsafe semantics concentrate within interpretable, low-dimensional subspaces at the head level, where a small set of safety-critical heads is responsible for unsafe feature extraction. We further observe that perturbing the Rotary Positional Embedding (RoPE) applied to the query and key vectors can effectively modify specific concepts in the generated images. Motivated by these insights, we propose SafeRoPE, a lightweight and fine-grained safe generation framework for MMDiT. Specifically, SafeRoPE first constructs head-wise unsafe subspaces by decomposing unsafe embeddings within safety-critical heads, and computes a Latent Risk Score (LRS) for each input vector via projection onto these subspaces. We then introduce head-wise RoPE perturbations that suppress unsafe semantics without degrading benign content or image quality. SafeRoPE combines the head-wise LRS with these RoPE perturbations to perform risk-specific head-wise rotation on query and key vector embeddings, enabling precise suppression of unsafe outputs while maintaining generation fidelity. Extensive experiments demonstrate that SafeRoPE achieves SOTA performance in balancing effective harmful-content mitigation and utility preservation for safe generation with MMDiT. Code is available at https://github.com/deng12yx/SafeRoPE.
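To make the "risk-specific rotation" concrete, here is a minimal sketch of standard RoPE applied to a single query/key vector, with an extra angle offset standing in for SafeRoPE's perturbation. The `delta` parameter and its coupling to a risk score are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def rope_rotate(x: np.ndarray, pos: int, delta: float = 0.0,
                base: float = 10000.0) -> np.ndarray:
    """Apply RoPE to one head's query/key vector at position `pos`.

    `delta` adds a uniform angle offset to every rotation pair; in a SafeRoPE-style
    scheme it would be scaled by that head's Latent Risk Score, so benign inputs
    (delta ~ 0) are left effectively unchanged.
    """
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) / half)   # per-pair rotation frequencies
    theta = pos * freqs + delta                 # perturbed rotation angles
    x1, x2 = x[:half], x[half:]
    # Rotate each (x1[i], x2[i]) pair by theta[i]; a pure rotation preserves norm.
    return np.concatenate([x1 * np.cos(theta) - x2 * np.sin(theta),
                           x1 * np.sin(theta) + x2 * np.cos(theta)])
```

Because the perturbation is a rotation, vector norms (and hence attention score magnitudes for benign heads) are preserved, which is consistent with the paper's claim that image quality is maintained.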