HarassGuard: Detecting Harassment Behaviors in Social Virtual Reality with Vision-Language Models
arXiv cs.CV / 4/2/2026
Key Points
- The paper introduces HarassGuard, a vision-language model–based system for detecting physical harassment behaviors in social VR using only visual input to reduce privacy risks from biometric data.
- It reports the creation of an IRB-approved harassment vision dataset and describes fine-tuning VLMs with prompt engineering and contextual information to improve detection in social VR scenes.
- Experimental results show competitive performance against traditional baselines (LSTM/CNN and Transformer), with up to 88.09% accuracy for binary classification and 68.85% for multi-class classification.
- The authors claim HarassGuard can match baseline performance with substantially fewer fine-tuning samples (200 vs. 1,115), highlighting its data efficiency and the benefits of contextual reasoning.
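The paper does not publish its exact prompts, but the prompt-engineering idea in the second point can be sketched as follows: pair a VR scene frame with contextual metadata in a text prompt and ask the VLM to answer with a label only. All field names, labels, and wording here are illustrative assumptions, not the authors' actual prompts.

```python
# Hypothetical sketch of a context-augmented classification prompt for a VLM.
# The scene fields (scene, num_avatars, distance) are assumed metadata, not
# the paper's actual schema; the image itself would be passed to the VLM
# separately alongside this text.

BINARY_LABELS = ["harassment", "non-harassment"]

def build_prompt(context: dict, labels=BINARY_LABELS) -> str:
    """Compose a text prompt pairing scene context with the classification task."""
    lines = [
        "You are analyzing a frame from a social VR session.",
        f"Scene: {context.get('scene', 'unknown')}",
        f"Avatars visible: {context.get('num_avatars', 'unknown')}",
        f"Interaction distance: {context.get('distance', 'unknown')}",
        "Based only on the visual behavior (no audio, no biometric data),",
        f"classify the interaction as one of: {', '.join(labels)}.",
        "Answer with the label only.",
    ]
    return "\n".join(lines)

prompt = build_prompt({"scene": "virtual lobby", "num_avatars": 2, "distance": "close"})
print(prompt)
```

A multi-class variant would simply swap in a longer label list (e.g. distinct harassment behaviors), which is consistent with the binary vs. multi-class results reported above.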