Introducing WARM-VR: Benchmark Dataset for Multimodal Wearable Affect Recognition in Virtual Reality
arXiv cs.LG / 5/4/2026
Key Points
- The paper introduces WARM-VR, a new publicly available multimodal benchmark dataset for wearable affect recognition specifically targeting immersive VR settings.
- Data were collected from 31 participants using wearable sensors (wristband: BVP, EDA, skin temperature, acceleration; chest strap: ECG) while they experienced VR sessions designed to elicit stress followed by relaxation.
- The VR experience included synchronized multimodal stimuli—visual, auditory, and olfactory—to study how multisensory cues affect emotional state changes.
- Subjective questionnaire results indicate VR relaxation significantly reduces negative affect, with olfactory enhancement showing particular benefit.
- Baseline experiments establish performance benchmarks: CNN/CNN-Bi-GRU on BVP for valence (best average F1 0.63, AUC 0.69), lightweight Transformers for arousal, and CNN-Bi-GRU achieving top results for the relaxation task (average F1 0.64, AUC 0.69).
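Baselines like the CNN and CNN-Bi-GRU above typically operate on fixed-length, overlapping windows cut from the raw sensor streams (e.g. the wristband BVP signal) rather than on whole sessions. A minimal sketch of that preprocessing step follows; the window length, overlap, and 64 Hz sampling rate are illustrative assumptions, not values taken from the paper:

```python
def window_signal(signal, fs, win_s=60.0, overlap=0.5):
    """Slice a 1-D sensor stream sampled at fs Hz into fixed-length,
    overlapping windows suitable as classifier inputs.

    win_s (window length in seconds) and overlap (fraction) are
    illustrative defaults, not parameters from the WARM-VR paper.
    """
    win = int(win_s * fs)                     # samples per window
    step = max(1, int(win * (1.0 - overlap))) # hop between window starts
    return [signal[i:i + win]
            for i in range(0, len(signal) - win + 1, step)]

# Example: 5 minutes of a 64 Hz BVP stream -> 60 s windows, 50% overlap
bvp = [0.0] * (5 * 60 * 64)
windows = window_signal(bvp, fs=64)
print(len(windows), len(windows[0]))  # 9 windows of 3840 samples each
```

Each window would then be labeled with the affect annotation (valence, arousal, or the stress/relaxation phase) of the session segment it falls in, and per-window predictions are aggregated into the reported average F1 and AUC scores.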