TSHA: A Benchmark for Visual Language Models in Trustworthy Safety Hazard Assessment Scenarios

arXiv cs.CV / 4/1/2026

Key Points

  • The paper proposes TSHA (Trustworthy Safety Hazards Assessment), a new benchmark for evaluating vision-language models (VLMs) on indoor safety hazard assessment scenarios.
  • It addresses limitations of prior benchmarks by reducing the synthetic-to-real domain gap, expanding safety tasks beyond oversimplified constraints, and introducing more rigorous evaluation protocols.
  • TSHA includes 81,809 curated training samples sourced from existing indoor datasets, internet images, AIGC images, and newly captured images to better reflect real environments.
  • The benchmark’s challenging test set (1,707 samples) contains videos and panoramic images with multiple simultaneous hazards to measure robustness in complex home safety contexts.
  • Experiments across 23 VLMs show current models perform poorly on safety hazard assessment, while training on TSHA improves results by up to +18.3 points and boosts generalizability on other benchmarks.

Abstract

Recent advances in vision-language models (VLMs) have accelerated their application to indoor safety hazard assessment. However, existing benchmarks suffer from three fundamental limitations: (1) heavy reliance on synthetic datasets constructed via simulation software, creating a significant domain gap with real-world environments; (2) oversimplified safety tasks with artificial constraints on hazard and scene types, limiting model generalization; and (3) the absence of rigorous evaluation protocols to thoroughly assess model capabilities in complex home safety scenarios. To address these challenges, we introduce TSHA (**T**rustworthy **S**afety **H**azards **A**ssessment), a comprehensive benchmark comprising 81,809 carefully curated training samples drawn from four complementary sources: existing indoor datasets, internet images, AIGC images, and newly captured images. The benchmark also includes a highly challenging test set of 1,707 samples, comprising not only a carefully selected subset from the training distribution but also newly added videos and panoramic images containing multiple safety hazards, used to evaluate model robustness in complex safety scenarios. Extensive experiments on 23 popular VLMs demonstrate that current VLMs lack robust capabilities for safety hazard assessment. Importantly, models trained on the TSHA training set not only achieve a significant performance improvement of up to +18.3 points on the TSHA test set but also exhibit enhanced generalizability across other benchmarks, underscoring the substantial contribution and importance of the TSHA benchmark.
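A point-based gain such as the reported +18.3 is most naturally read as a difference in accuracy measured on a 0–100 scale. The following minimal sketch shows how such a score could be computed over a hazard-assessment test set; the sample schema, the callable model interface, and a multiple-choice answer format are all assumptions for illustration, not TSHA's actual protocol.

```python
# Hypothetical sketch of scoring a VLM on multiple-choice hazard-assessment
# samples. The sample fields ("image", "question", "answer") and the
# model-as-callable interface are assumptions, not TSHA's real API.

def accuracy(predictions, answers):
    """Fraction of exact-match predictions, expressed in points (0-100)."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return 100.0 * correct / len(answers)

def evaluate(model, test_set):
    """Query the model once per sample and score against gold answers."""
    preds = [model(s["image"], s["question"]) for s in test_set]
    gold = [s["answer"] for s in test_set]
    return accuracy(preds, gold)

# Toy usage: a stub "model" that always answers "A".
toy_test_set = [
    {"image": None, "question": "Which object is a fall hazard?", "answer": "A"},
    {"image": None, "question": "Which outlet is overloaded?", "answer": "B"},
]
score = evaluate(lambda img, q: "A", toy_test_set)
print(score)  # 50.0
```

An "improvement of +18.3 points" would then simply be `evaluate(finetuned_model, test_set) - evaluate(base_model, test_set)` under this framing.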