Erasing Thousands of Concepts: Towards Scalable and Practical Concept Erasure for Text-to-Image Diffusion Models
arXiv cs.CV / 4/21/2026
Key Points
- The paper introduces Erasing Thousands of Concepts (ETC), a scalable framework for concept erasure in text-to-image (T2I) diffusion models that can remove thousands of concepts while maintaining generation quality.
- ETC uses a Student’s t-distribution Mixture Model (tMM) to model low-rank concept distributions and applies affine optimal transport to precisely target erasure without relying on predefined anchor concepts.
- It trains an MoE-based “MoEraser” module to remove target concept embeddings while preserving anchor embeddings, improving the selectivity of the erasure.
- By injecting noise into the text embedding projector and fine-tuning MoEraser, the method gains robustness against white-box attacks such as module removal.
- Experiments across 2,000+ concepts and multiple diffusion models show that ETC surpasses prior work in both scalability and precision for large-scale concept erasure.
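The affine optimal-transport step in the second bullet has a well-known closed form when the two concept distributions are Gaussian. The sketch below is plain NumPy and is not the paper's implementation (ETC models concepts with a Student's t mixture); it only illustrates, under the Gaussian assumption, how an affine map pushes a target-concept embedding distribution onto an anchor distribution.

```python
import numpy as np

def sqrtm_psd(M):
    # Symmetric PSD matrix square root via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def gaussian_ot_map(mu_s, cov_s, mu_t, cov_t):
    """Closed-form affine optimal-transport map between two Gaussians:
    T(x) = mu_t + A (x - mu_s), where
    A = cov_s^{-1/2} (cov_s^{1/2} cov_t cov_s^{1/2})^{1/2} cov_s^{-1/2}."""
    s_half = sqrtm_psd(cov_s)
    s_half_inv = np.linalg.inv(s_half)
    A = s_half_inv @ sqrtm_psd(s_half @ cov_t @ s_half) @ s_half_inv
    return lambda x: mu_t + (x - mu_s) @ A.T

# Toy 2-D example: transport samples of a "target concept"
# distribution onto an "anchor" distribution (all values hypothetical).
rng = np.random.default_rng(0)
mu_s, cov_s = np.array([2.0, 0.0]), np.array([[1.0, 0.3], [0.3, 0.5]])
mu_t, cov_t = np.array([-1.0, 1.0]), np.array([[0.4, 0.0], [0.0, 0.9]])

T = gaussian_ot_map(mu_s, cov_s, mu_t, cov_t)
x = rng.multivariate_normal(mu_s, cov_s, size=5000)
y = T(x)

# The transported samples should match the anchor distribution:
print(y.mean(axis=0))   # close to mu_t
print(np.cov(y.T))      # close to cov_t
```

Because the map is affine, it can be folded into a linear text-embedding projector, which is what makes this style of erasure cheap to apply at inference time.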