Does Unification Come at a Cost? Uni-SafeBench: A Safety Benchmark for Unified Multimodal Large Models

arXiv cs.AI / 4/2/2026


Key Points

  • The paper argues that Unified Multimodal Large Models (UMLMs) gain performance from architectural unification, but that this unification introduces safety risks left underexplored by existing benchmarks, which evaluate understanding and generation in isolation.
  • It introduces Uni-SafeBench, a safety evaluation benchmark covering six major safety categories across seven task types, designed to test holistic safety under unified multimodal modeling.
  • The authors also propose Uni-Judger, an evaluation framework that decouples contextual safety from intrinsic safety, enabling a more rigorous assessment of the unified model's inherent safety behavior.
  • Findings from evaluations show that unification increases capabilities while substantially degrading the inherent safety of the underlying LLM, and that open-source UMLMs perform worse on safety than specialized multimodal models focused on either generation or understanding.
  • The work releases the benchmark and resources to help researchers systematically expose these risks and support safer AGI development.

Abstract

Unified Multimodal Large Models (UMLMs) integrate understanding and generation capabilities within a single architecture. While this architectural unification, driven by the deep fusion of multimodal features, enhances model performance, it also introduces important yet underexplored safety challenges. Existing safety benchmarks predominantly focus on isolated understanding or generation tasks, failing to evaluate the holistic safety of UMLMs when handling diverse tasks under a unified framework. To address this, we introduce Uni-SafeBench, a comprehensive benchmark featuring a taxonomy of six major safety categories across seven task types. To ensure rigorous assessment, we develop Uni-Judger, a framework that effectively decouples contextual safety from intrinsic safety. Based on comprehensive evaluations across Uni-SafeBench, we uncover that while the unification process enhances model capabilities, it significantly degrades the inherent safety of the underlying LLM. Furthermore, open-source UMLMs exhibit much lower safety performance than multimodal large models specialized for either generation or understanding tasks. We open-source all resources to systematically expose these risks and foster safer AGI development.
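To make the benchmark structure concrete, a harness like the one described (six safety categories crossed with seven task types, with a judge producing per-sample safe/unsafe verdicts) would ultimately aggregate those verdicts into per-category safety rates. The sketch below is a minimal, hypothetical illustration of that aggregation step; the category and task names are invented placeholders, not Uni-SafeBench's actual taxonomy or API.

```python
# Hypothetical sketch: aggregating per-sample safety verdicts into
# per-category safety rates, as a Uni-SafeBench-style harness might do.
# Category/task names below are illustrative assumptions, not the
# paper's actual taxonomy.
from collections import defaultdict

def aggregate_safety(records):
    """records: iterable of (category, task_type, is_safe) tuples.
    Returns a {category: safe_rate} mapping over all task types."""
    totals = defaultdict(int)
    safe = defaultdict(int)
    for category, task_type, is_safe in records:
        totals[category] += 1
        safe[category] += int(is_safe)
    return {c: safe[c] / totals[c] for c in totals}

# Toy example with two invented categories and three invented task types.
records = [
    ("violence", "text-to-image", False),
    ("violence", "image-understanding", True),
    ("privacy", "image-editing", True),
    ("privacy", "text-to-image", True),
]
rates = aggregate_safety(records)  # e.g. {"violence": 0.5, "privacy": 1.0}
```

In a real evaluation, the `is_safe` flag would come from a judge model (here, Uni-Judger), and one could further split rates by task type to compare understanding-side versus generation-side safety, which is the kind of contrast the paper's findings draw.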