UniSAFE: A Comprehensive Benchmark for Safety Evaluation of Unified Multimodal Models
arXiv cs.CV / 3/19/2026
Key Points
- UniSAFE is the first comprehensive benchmark for system-level safety evaluation of Unified Multimodal Models (UMMs) across 7 I/O modality combinations, addressing fragmentation in existing safety benchmarks.
- The benchmark comprises 6,802 curated instances and is used to evaluate 15 state-of-the-art UMMs, including both proprietary and open-source models.
- Findings reveal widespread vulnerabilities in current UMMs: safety violations rise in multi-image composition and multi-turn settings, and image-output tasks prove more vulnerable than text-output tasks.
- The work underscores the need for stronger system-level safety alignment in UMMs; the code and data are publicly released in the project's GitHub repository.