HazardArena: Evaluating Semantic Safety in Vision-Language-Action Models

arXiv cs.RO / 15 Apr 2026


Key Points

  • Vision-Language-Action (VLA) models can execute actions correctly yet still produce unsafe outcomes, because existing evaluations focus on execution success and leave action policies only loosely coupled with visual-linguistic semantics.
  • The paper introduces HazardArena, a new benchmark built from matched “safe/unsafe twin” scenarios to isolate semantic risk, and includes 2,000+ assets, 40 risk-sensitive tasks, and 7 risk categories aligned with robotic safety standards.
  • Experiments show that models trained only on safe scenarios frequently fail when tested on semantically corresponding unsafe variants, revealing a systematic semantic-safety vulnerability.
  • To address the issue without retraining, the authors propose a training-free Safety Option Layer that constrains execution using semantic attributes or a vision-language judge, reducing unsafe behavior with minimal impact on task performance.
  • The work argues that, as VLAs scale toward real-world deployment, evaluation must move beyond action success rates to explicitly measure and enforce semantic safety.

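The twin-scenario idea above can be made concrete with a small sketch. The paper does not publish a schema, so the `Scenario` fields and `make_twins` helper below are illustrative assumptions: twins share objects, layout, and the action requirement, and differ only in the semantic context that determines risk.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Scenario:
    objects: tuple         # shared assets
    layout: str            # shared spatial arrangement
    instruction: str       # shared action requirement
    semantic_context: str  # the ONLY field that differs between twins
    risk_category: str     # e.g. one of the benchmark's 7 risk categories


def make_twins(objects, layout, instruction, safe_ctx, unsafe_ctx, risk):
    """Build a matched safe/unsafe twin pair that isolates semantic risk."""
    safe = Scenario(objects, layout, instruction, safe_ctx, "none")
    unsafe = Scenario(objects, layout, instruction, unsafe_ctx, risk)
    return safe, unsafe


# Hypothetical example: identical pick-and-place task; only context flips risk.
safe, unsafe = make_twins(
    objects=("cup", "stove"),
    layout="cup left of stove",
    instruction="place the cup on the stove",
    safe_ctx="the stove is off",
    unsafe_ctx="the stove burner is lit",
    risk="thermal hazard",
)
assert safe.instruction == unsafe.instruction      # action requirement matched
assert safe.semantic_context != unsafe.semantic_context  # only context differs
```

A policy that ignores `semantic_context` will behave identically on both twins, which is exactly the failure mode the benchmark is built to surface.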
Abstract

Vision-Language-Action (VLA) models inherit rich world knowledge from vision-language backbones and acquire executable skills via action demonstrations. However, existing evaluations largely focus on action execution success, leaving action policies loosely coupled with visual-linguistic semantics. This decoupling exposes a systematic vulnerability whereby correct action execution may induce unsafe outcomes under semantic risk. To expose this vulnerability, we introduce HazardArena, a benchmark designed to evaluate semantic safety in VLAs under controlled yet risk-bearing contexts. HazardArena is constructed from safe/unsafe twin scenarios that share matched objects, layouts, and action requirements, differing only in the semantic context that determines whether an action is unsafe. We find that VLA models trained exclusively on safe scenarios often fail to behave safely when evaluated in their corresponding unsafe counterparts. HazardArena includes over 2,000 assets and 40 risk-sensitive tasks spanning 7 real-world risk categories grounded in established robotic safety standards. To mitigate this vulnerability, we propose a training-free Safety Option Layer that constrains action execution using semantic attributes or a vision-language judge, substantially reducing unsafe behaviors with minimal impact on task performance. We hope that HazardArena highlights the need to rethink how semantic safety is evaluated and enforced in VLAs as they scale toward real-world deployment.
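The training-free mitigation can be sketched as a gate wrapped around a frozen policy. The function names, the `"halt"` fallback, and the attribute-based judge below are assumptions for illustration, not the paper's implementation; the paper's judge may instead be a vision-language model.

```python
from typing import Callable


def safety_option_layer(policy_action: str,
                        observation: dict,
                        judge: Callable[[str, dict], bool]) -> str:
    """Training-free gate: execute the frozen VLA policy's proposed action
    only if a semantic judge deems it safe in the current context; otherwise
    substitute a conservative fallback. The policy itself is never retrained."""
    if judge(policy_action, observation):
        return policy_action
    return "halt"  # conservative fallback (assumed, not from the paper)


def attribute_judge(action: str, obs: dict) -> bool:
    """Toy semantic-attribute judge: veto placing anything on a lit burner."""
    if "stove" in action and obs.get("burner_lit", False):
        return False
    return True


# Same action, twin contexts: only the unsafe context triggers the gate.
print(safety_option_layer("place cup on stove", {"burner_lit": True}, attribute_judge))   # halt
print(safety_option_layer("place cup on stove", {"burner_lit": False}, attribute_judge))  # place cup on stove
```

Because the layer only filters outputs, it leaves task performance on safe twins untouched, which matches the paper's claim of minimal impact on task success.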