UniSAFE: A Comprehensive Benchmark for Safety Evaluation of Unified Multimodal Models
arXiv cs.CV / March 19, 2026
Key Points
- UniSAFE is the first comprehensive benchmark for system-level safety evaluation of Unified Multimodal Models (UMMs) across 7 I/O modality combinations, addressing the fragmented, modality-specific coverage of existing safety benchmarks.
- The benchmark comprises 6,802 curated instances and is used to evaluate 15 state-of-the-art UMMs, including both proprietary and open-source models.
- Findings reveal widespread vulnerabilities across current UMMs: safety violations are elevated in multi-image composition and multi-turn settings, and image-output tasks prove more vulnerable than text-output tasks (a per-combination tally of violation rates, sketched after this list, makes the comparison concrete).
- The work underscores the need for stronger system-level safety alignment for UMMs and publicly releases the code and data at the project's GitHub repository.
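To make the reported comparison across modality combinations concrete, here is a minimal sketch of how per-combination safety violation rates might be aggregated. The modality labels and the `generate` / `is_violation` stubs are illustrative assumptions, not UniSAFE's actual API; consult the project's GitHub repository for the real harness and data.

```python
# Hypothetical sketch of a per-modality safety evaluation loop in the
# spirit of UniSAFE; names below are illustrative, not the benchmark's API.
from collections import defaultdict
from typing import Callable, Iterable

# The paper reports 7 I/O modality combinations; these labels are placeholders.
MODALITY_COMBOS = [
    "text->text", "image->text", "multi-image->text",
    "text->image", "image->image", "interleaved->interleaved",
    "multi-turn",
]

def violation_rates(
    instances: Iterable[dict],                  # each: {"combo": str, "prompt": ...}
    generate: Callable[[dict], str],            # model under test (stub)
    is_violation: Callable[[dict, str], bool],  # safety judge (stub)
) -> dict[str, float]:
    """Fraction of unsafe responses, aggregated per modality combination."""
    hits: defaultdict[str, int] = defaultdict(int)
    totals: defaultdict[str, int] = defaultdict(int)
    for inst in instances:
        combo = inst["combo"]
        totals[combo] += 1
        if is_violation(inst, generate(inst)):
            hits[combo] += 1
    return {c: hits[c] / totals[c] for c in totals}

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    demo = [{"combo": c, "prompt": f"instance for {c}"} for c in MODALITY_COMBOS]
    rates = violation_rates(
        demo,
        generate=lambda inst: "refused",
        is_violation=lambda inst, out: out != "refused",
    )
    for combo, rate in sorted(rates.items()):
        print(f"{combo:28s} violation rate: {rate:.1%}")
```

Grouping results by modality combination, as above, is what lets a benchmark surface the paper's headline pattern, e.g. higher violation rates for image-output and multi-turn settings than for plain text-output tasks.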
Related Articles
[R] Combining Identity Anchors + Permission Hierarchies achieves 100% refusal in abliterated LLMs — system prompt only, no fine-tuning
Reddit r/MachineLearning
The Demethylation
Dev.to
[P] Vibecoded on a home PC: building a ~2700 Elo browser-playable neural chess engine with a Karpathy-inspired AI-assisted research loop
Reddit r/MachineLearning
Meet DuckLLM 1.0, My First Model!
Reddit r/LocalLLaMA
95% of UK students now use AI and their experiences couldn't be more divided
THE DECODER