AI Navigate

OOD-MMSafe: Advancing MLLM Safety from Harmful Intent to Hidden Consequences

arXiv cs.AI / 3/11/2026

Ideas & Deep Analysis / Models & Research

Key Points

  • OOD-MMSafe introduces a benchmark of 455 curated query-image pairs that evaluates Multimodal Large Language Models (MLLMs) on identifying latent hazards within context-dependent causal chains, shifting the safety focus from malicious intent to consequence-driven safety.
  • The study uncovers pervasive causal blindness in leading MLLMs, with failure rates of up to 67.5% for high-capacity closed-source models, and exposes a limitation of static alignment: as model capacity grows, it yields format-centric failures rather than improved safety reasoning (a minimal evaluation sketch follows this list).
  • To address these bottlenecks, the authors propose Consequence-Aware Safety Policy Optimization (CASPO), a framework that uses the model's intrinsic reasoning as a dynamic reference for token-level self-distillation rewards, substantially improving risk identification.
  • Experimental results show CASPO reduces the failure ratio of risk identification to 7.3% for Qwen2.5-VL-7B and 5.7% for Qwen3-VL-4B while maintaining overall effectiveness, a meaningful step toward deploying safer autonomous and embodied agents.
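The benchmark's evaluation harness is not described here, so the following is only a minimal sketch of how a failure ratio over query-image pairs could be computed. The SafetyCase fields, the identifies_hazard judge, and the model.generate interface are illustrative assumptions, not the released tooling (a real evaluation would likely use an LLM judge or a rubric rather than string matching).

```python
from dataclasses import dataclass

@dataclass
class SafetyCase:
    query: str       # benign-looking user request
    image_path: str  # visual context that introduces the latent hazard
    hazard: str      # reference description of the hidden consequence

def identifies_hazard(response: str, hazard: str) -> bool:
    # Placeholder judge: checks whether the hazard is mentioned at all.
    return hazard.lower() in response.lower()

def failure_ratio(model, cases: list[SafetyCase]) -> float:
    # Fraction of cases in which the model's answer misses the latent hazard.
    failures = sum(
        not identifies_hazard(
            model.generate(query=c.query, image=c.image_path), c.hazard
        )
        for c in cases
    )
    return failures / len(cases)
```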

Computer Science > Artificial Intelligence

arXiv:2603.09706 (cs)
[Submitted on 10 Mar 2026]

Title: OOD-MMSafe: Advancing MLLM Safety from Harmful Intent to Hidden Consequences

Authors: Ming Wen and 6 other authors
Abstract: While safety alignment for Multimodal Large Language Models (MLLMs) has gained significant attention, current paradigms primarily target malicious intent or situational violations. We propose shifting the safety frontier toward consequence-driven safety, a paradigm essential for the robust deployment of autonomous and embodied agents. To formalize this shift, we introduce OOD-MMSafe, a benchmark comprising 455 curated query-image pairs designed to evaluate a model's ability to identify latent hazards within context-dependent causal chains. Our analysis reveals a pervasive causal blindness among frontier models, with a failure rate as high as 67.5% for high-capacity closed-source models, and identifies a preference ceiling where static alignment yields format-centric failures rather than improved safety reasoning as model capacity grows. To address these bottlenecks, we develop the Consequence-Aware Safety Policy Optimization (CASPO) framework, which integrates the model's intrinsic reasoning as a dynamic reference for token-level self-distillation rewards. Experimental results demonstrate that CASPO significantly enhances consequence projection, reducing the failure ratio of risk identification to 7.3% for Qwen2.5-VL-7B and 5.7% for Qwen3-VL-4B while maintaining overall effectiveness.
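The abstract specifies CASPO only at this level of detail, so the snippet below is a hedged sketch of what a token-level self-distillation reward could look like: the policy's per-token distribution over the response is scored against a reference distribution obtained from the model's own reasoning pass, and agreement is rewarded. The function name, the temperature parameter, and the choice of negative per-token KL divergence as the reward signal are assumptions for illustration, not the paper's published objective.

```python
import torch
import torch.nn.functional as F

def self_distillation_rewards(policy_logits: torch.Tensor,
                              reference_logits: torch.Tensor,
                              temperature: float = 1.0) -> torch.Tensor:
    """Per-token rewards from agreement with an intrinsic reference.

    policy_logits, reference_logits: [seq_len, vocab_size] logits over the
    same response tokens; the reference is produced under the model's own
    reasoning trace (the "dynamic reference").
    Returns a [seq_len] tensor of negative per-token KL divergences, so
    tokens that drift from the intrinsic reference receive lower reward.
    """
    log_p = F.log_softmax(policy_logits / temperature, dim=-1)
    log_q = F.log_softmax(reference_logits / temperature, dim=-1)
    per_token_kl = (log_q.exp() * (log_q - log_p)).sum(dim=-1)  # KL(q || p)
    return -per_token_kl
```

In a policy-optimization loop, such per-token rewards would presumably be combined with an outcome-level safety reward before the policy update; how CASPO actually weights and schedules these signals is not stated in the abstract.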
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09706 [cs.AI]
  (or arXiv:2603.09706v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09706

Submission history

From: Ming Wen
[v1] Tue, 10 Mar 2026 14:16:43 UTC (6,095 KB)