Large-Scale Universal Defect Generation: Foundation Models and Datasets
arXiv cs.CV / 4/13/2026
Key Points
- The paper introduces UDG, a large-scale dataset of 300K normal/abnormal/mask/caption quadruplets spanning diverse domains, addressing the scarcity of paired defect-editing data that limits prior few-shot methods.
- It presents UniDG, a universal foundation model for defect generation that supports both reference-based generation and text instruction-based defect editing without per-category fine-tuning.
- UniDG uses Defect-Context Editing with adaptive defect cropping and a structured “diptych” input format, and it fuses reference and target conditions via MM-DiT multimodal attention.
- A two-stage training approach (Diversity-SFT followed by Consistency-RFT) is used to improve diversity while also boosting realism and consistency with reference conditions.
- Experiments on MVTec-AD and VisA indicate UniDG outperforms existing few-shot anomaly generation and image insertion/editing baselines and improves downstream anomaly detection/localization.
- The authors state that code will be released in a GitHub repository.
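To make the data and conditioning format concrete, here is a minimal sketch of what a UDG-style quadruplet record, an adaptive defect crop, and a two-panel "diptych" conditioning input might look like. All names, the padding scheme, and the resize logic are illustrative assumptions, not the paper's actual schema or implementation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DefectQuadruplet:
    """One UDG-style record (field names are assumptions, not the paper's schema)."""
    normal: np.ndarray    # H x W x 3 defect-free image
    abnormal: np.ndarray  # H x W x 3 image containing the defect
    mask: np.ndarray      # H x W binary defect mask
    caption: str          # text description of the defect

def crop_defect(rec: DefectQuadruplet, pad: int = 8) -> np.ndarray:
    """Adaptive crop around the defect region, padded by `pad` pixels."""
    ys, xs = np.nonzero(rec.mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + 1 + pad, rec.mask.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + 1 + pad, rec.mask.shape[1])
    return rec.abnormal[y0:y1, x0:x1]

def make_diptych(reference_crop: np.ndarray, target: np.ndarray,
                 size: int = 256) -> np.ndarray:
    """Place a resized reference defect crop and the target image side by side,
    mimicking a structured two-panel ("diptych") conditioning input."""
    def resize(img: np.ndarray, s: int) -> np.ndarray:
        # naive nearest-neighbor resize to keep the sketch dependency-free
        yi = np.arange(s) * img.shape[0] // s
        xi = np.arange(s) * img.shape[1] // s
        return img[yi][:, xi]
    left, right = resize(reference_crop, size), resize(target, size)
    return np.concatenate([left, right], axis=1)  # shape: size x 2*size x 3
```

In the actual model, the two panels would be encoded and fused with the text caption via MM-DiT multimodal attention rather than naively concatenated in pixel space; this sketch only illustrates the input layout.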