AI Navigate

OOD-MMSafe: Advancing MLLM Safety from Harmful Intent to Hidden Consequences

arXiv cs.AI / 11 Mar 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • OOD-MMSafe introduces a new benchmark of 455 curated query-image pairs for evaluating multimodal large language models (MLLMs) on identifying latent hazards within context-dependent causal chains, shifting the safety focus from malicious intent to consequence-driven safety.
  • The study reveals a pervasive causal blind spot in leading MLLMs, with failure rates as high as 67.5% in high-capacity closed-source models, exposing a limitation of current static alignment methods: they center on response format rather than safety reasoning.
  • To overcome these bottlenecks, the authors propose Consequence-Aware Safety Policy Optimization (CASPO), a framework that uses the model's intrinsic reasoning for token-level self-distillation rewards, dramatically improving risk-identification performance.
  • Experimentally, CASPO reduces the failure rate to as low as 5.7% on the Qwen3-VL-4B model while preserving overall model effectiveness, marking an important step toward the safe deployment of autonomous and embodied agents.

Computer Science > Artificial Intelligence

arXiv:2603.09706 (cs)
[Submitted on 10 Mar 2026]

Title:OOD-MMSafe: Advancing MLLM Safety from Harmful Intent to Hidden Consequences

Authors: Ming Wen and 6 other authors
Abstract: While safety alignment for Multimodal Large Language Models (MLLMs) has gained significant attention, current paradigms primarily target malicious intent or situational violations. We propose shifting the safety frontier toward consequence-driven safety, a paradigm essential for the robust deployment of autonomous and embodied agents. To formalize this shift, we introduce OOD-MMSafe, a benchmark comprising 455 curated query-image pairs designed to evaluate a model's ability to identify latent hazards within context-dependent causal chains. Our analysis reveals a pervasive causal blindness among frontier models, with a failure rate of up to 67.5% in high-capacity closed-source models, and identifies a preference ceiling where static alignment yields format-centric failures rather than improved safety reasoning as model capacity grows. To address these bottlenecks, we develop the Consequence-Aware Safety Policy Optimization (CASPO) framework, which integrates the model's intrinsic reasoning as a dynamic reference for token-level self-distillation rewards. Experimental results demonstrate that CASPO significantly enhances consequence projection, reducing the failure ratio of risk identification to 7.3% for Qwen2.5-VL-7B and 5.7% for Qwen3-VL-4B while maintaining overall effectiveness.
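The abstract describes CASPO only at a high level, and no implementation is given here. As a rough, hypothetical sketch of the idea of a token-level self-distillation reward combined with a terminal consequence-level outcome reward, the following plain-Python fragment may help; the function names, the shaping rule, and the discounting are assumptions for illustration, not the authors' method:

```python
def token_level_self_distillation_rewards(policy_logps, reference_logps, beta=0.1):
    """Hypothetical per-token shaping reward.

    policy_logps:    log-probabilities the policy assigned to each generated token.
    reference_logps: log-probabilities the *same model* assigns to those tokens
                     when conditioned on its own intrinsic reasoning trace
                     (the "dynamic reference" the abstract mentions).
    A token earns a positive reward when the reasoning-conditioned reference
    prefers it more strongly than the raw policy did.
    """
    assert len(policy_logps) == len(reference_logps)
    return [beta * (r - p) for p, r in zip(policy_logps, reference_logps)]


def sequence_returns(token_rewards, outcome_reward, gamma=1.0):
    """Per-token return: discounted sum of later shaping rewards plus a
    terminal, consequence-level safety outcome reward (e.g. whether the
    latent hazard was identified)."""
    returns, g = [], outcome_reward
    for r in reversed(token_rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))
```

In an RLHF-style trainer these per-token returns would replace the usual single sequence-level reward, giving the policy-gradient update a denser signal about *where* in the response the safety reasoning diverged.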
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09706 [cs.AI]
  (or arXiv:2603.09706v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09706
arXiv-issued DOI via DataCite

Submission history

From: Ming Wen [view email]
[v1] Tue, 10 Mar 2026 14:16:43 UTC (6,095 KB)