SHIFT: Steering Hidden Intermediates in Flow Transformers
arXiv cs.CV / 4/13/2026
Key Points
- The paper introduces SHIFT, a lightweight inference-time framework for DiT (Diffusion Transformer) models that removes unwanted visual concepts by manipulating intermediate activations.
- SHIFT learns steering vectors and applies them dynamically across selected layers and timesteps to suppress specific concepts while retaining prompt-relevant content and image quality.
- The method is retraining-free: all steering happens at inference time, with the stated aim of controlling generation reliably across diverse prompts and target concepts.
- Beyond suppression, SHIFT can steer outputs into a desired style domain or bias images toward adding/changing target objects, suggesting broader controllability.
- The approach is inspired by activation steering techniques used in large language models, transferring the idea to diffusion/DiT generation workflows.
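The core idea, projecting an unwanted concept direction out of a block's hidden activations and applying this only on selected layers and timesteps, can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: the function names, the fixed `alpha` strength, and the hard-coded layer/timestep gate are all assumptions standing in for SHIFT's learned steering vectors and dynamic schedule.

```python
import numpy as np

def steer_activations(h, v, alpha=1.0):
    """Suppress the concept direction v in hidden activations h.

    h: (tokens, dim) intermediate activations of one transformer block
    v: (dim,) steering vector for the unwanted concept (assumed learned
       elsewhere; SHIFT's actual training procedure is not shown here)
    alpha: steering strength (hypothetical scalar knob)
    """
    v = v / np.linalg.norm(v)
    # Remove the (scaled) component of each token activation along v.
    return h - alpha * (h @ v)[:, None] * v[None, :]

def maybe_steer(h, v, layer, t, layers=frozenset({4, 8}), t_range=(0.2, 0.8)):
    """Gate steering to selected layers and diffusion timesteps.

    The specific layer set and normalized-timestep window here are
    placeholders; the paper selects these dynamically.
    """
    if layer in layers and t_range[0] <= t <= t_range[1]:
        return steer_activations(h, v)
    return h
```

With `alpha=1.0` the projection of every token activation onto the concept direction is driven to zero, while components orthogonal to it, which carry the prompt-relevant content, are left untouched.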