AI Navigate

SAVeS: Steering Safety Judgments in Vision-Language Models via Semantic Cues

arXiv cs.CL / 3/20/2026

💬 OpinionIdeas & Deep AnalysisModels & Research

Key Points

  • Vision-language models' safety judgments are highly influenced by semantic cues rather than grounded visual understanding.
  • The authors introduce a semantic steering framework that uses controlled textual, visual, and cognitive interventions without changing the underlying scene content.
  • SAVeS, a new benchmark, along with an evaluation protocol, separates behavioral refusals, grounded safety reasoning, and false refusals to assess the impact of semantic cues.
  • Experiments across multiple VLMs show safety decisions rely on learned visual-linguistic associations, and automated steering pipelines can exploit these vulnerabilities.

Abstract

Vision-language models (VLMs) are increasingly deployed in real-world and embodied settings where safety decisions depend on visual context. However, it remains unclear which visual evidence drives these judgments. We study whether multimodal safety behavior in VLMs can be steered by simple semantic cues. We introduce a semantic steering framework that applies controlled textual, visual, and cognitive interventions without changing the underlying scene content. To evaluate these effects, we propose SAVeS, a benchmark for situational safety under semantic cues, together with an evaluation protocol that separates behavioral refusal, grounded safety reasoning, and false refusals. Experiments across multiple VLMs and an additional state-of-the-art benchmark show that safety decisions are highly sensitive to semantic cues, indicating reliance on learned visual-linguistic associations rather than grounded visual understanding. We further demonstrate that automated steering pipelines can exploit these mechanisms, highlighting a potential vulnerability in multimodal safety systems.