Robust Multimodal Safety via Conditional Decoding

arXiv cs.AI / 4/2/2026


Key Points

  • The paper argues that multimodal LLMs (MLLMs) can lose safety alignment when harmful queries exploit cross-modal interactions: alignment learned from text alone becomes less effective once additional modalities are added.
  • It introduces CASA (Classification Augmented with Safety Attention), a conditional decoding approach that predicts a binary safety token using internal model representations before generating a response.
  • CASA adds a safety attention module to improve detection of malicious queries while avoiding external classifiers, auxiliary heads, and modality-specific safety fine-tuning.
  • Experiments on benchmarks including MM-SafetyBench, JailbreakV-28k, and adversarial audio tests show CASA reduces average attack success rates by over 97% across modalities and attack types.
  • The method preserves strong performance on benign inputs, with both automated evaluation and human assessment by 13 trained annotators supporting its utility–safety tradeoff.

Abstract

Multimodal large language models (MLLMs) often experience degraded safety alignment when harmful queries exploit cross-modal interactions. Models aligned on text alone show a higher rate of successful attacks when extended to two or more modalities. In this work, we propose a simple conditional decoding strategy, CASA (Classification Augmented with Safety Attention), that uses internal representations of MLLMs to predict a binary safety token before response generation. We introduce a novel safety attention module designed to enhance the model's ability to detect malicious queries. Our design ensures robust safety alignment without relying on any external classifier or auxiliary head, and without the need for modality-specific safety fine-tuning. On diverse benchmarks such as MM-SafetyBench, JailbreakV-28k, and adversarial audio tests, CASA lowers the average attack success rate by more than 97% across modalities and attack types. Our empirical evaluations also show that CASA maintains strong utility on benign inputs, a result validated through both automated and human evaluations (via 13 trained annotators). Together, these results highlight CASA as a simple and generalizable framework for improving multimodal LLM safety.
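The core idea, predicting a binary safety token from attention-pooled internal representations before any response tokens are generated, can be sketched in minimal, dependency-free Python. This is an illustration under stated assumptions, not the paper's implementation: the function names, the single learned safety query vector, the linear probe, and the refusal string are all hypothetical.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def safety_attention_pool(hidden_states, query_vec):
    # hidden_states: one d-dim vector per input token (any modality,
    # since everything is pooled in the model's shared representation space).
    # query_vec: a learned d-dim safety query (assumed trained elsewhere).
    scores = [sum(h_j * q_j for h_j, q_j in zip(h, query_vec))
              for h in hidden_states]
    weights = softmax(scores)
    d = len(hidden_states[0])
    # Attention-weighted average of the token representations.
    return [sum(w * h[j] for w, h in zip(weights, hidden_states))
            for j in range(d)]

def safety_token(pooled, probe_w, probe_b):
    # Binary safety decision emitted *before* any response generation.
    logit = sum(p * w for p, w in zip(pooled, probe_w)) + probe_b
    return "SAFE" if logit >= 0.0 else "UNSAFE"

def conditional_decode(hidden_states, query_vec, probe_w, probe_b, generate):
    # Gate normal decoding on the predicted safety token; no external
    # classifier or auxiliary head, only the model's own representations.
    pooled = safety_attention_pool(hidden_states, query_vec)
    if safety_token(pooled, probe_w, probe_b) == "UNSAFE":
        return "I can't help with that request."
    return generate()
```

The design point this sketch captures is that the safety decision is a single extra token's worth of computation conditioned on representations the model already produces, so benign queries pass through to `generate()` unchanged.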