MVRD-Bench: Multi-View Learning and Benchmarking for Dynamic Remote Photoplethysmography under Occlusion
arXiv cs.CV · March 25, 2026
Key Points
- The paper introduces MVRD-Bench (MVRD), a multi-view remote photoplethysmography dataset with synchronized facial videos from three viewpoints designed to reflect real-world motion, speaking, and occlusion conditions.
- It proposes MVRD-rPPG, a unified multi-view learning framework that fuses complementary cues to improve robustness when facial skin coverage is partially lost due to motion-induced occlusion.
- The framework includes motion artifact suppression (ATOC), feature disentanglement via a Rhythm-Visual Dual-Stream Network, and view-wise aggregation using Multi-View Correlation-Aware Attention (MVCA).
- It adds a Correlation Frequency Adversarial (CFA) training strategy to jointly enforce temporal accuracy, spectral consistency, and perceptual realism of the estimated physiological signals.
- Experiments on the MVRD dataset show strong performance under movement, including an MAE of 0.90 and a Pearson correlation (R) of 0.99; the authors state that the code and dataset will be released.
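The view-wise aggregation idea behind MVCA can be illustrated with a minimal sketch. The paper's actual attention mechanism is not detailed here, so the following is a hypothetical simplification: each view's candidate rPPG signal is weighted by how strongly it correlates with the other views, so views that agree on the underlying pulse rhythm dominate the fused estimate while an occluded or noisy view is down-weighted.

```python
import numpy as np

def correlation_aware_fusion(view_signals):
    """Hypothetical sketch of correlation-aware multi-view fusion
    (not the paper's MVCA module): weight each view's rPPG estimate
    by its mean Pearson correlation with the other views, then take
    a softmax-weighted average across views.

    view_signals: array of shape (V, T), one candidate signal per view.
    Returns the fused signal of shape (T,).
    """
    V, _ = view_signals.shape
    # Pairwise Pearson correlation matrix between the V views.
    corr = np.corrcoef(view_signals)              # shape (V, V)
    # Each view's "support": mean correlation with the other views,
    # excluding the self-correlation of 1.0 on the diagonal.
    support = (corr.sum(axis=1) - 1.0) / (V - 1)
    # Softmax over supports -> per-view attention weights summing to 1.
    w = np.exp(support) / np.exp(support).sum()
    return w @ view_signals

# Toy usage: two views that see the pulse, one occluded (noise-only) view.
t = np.linspace(0.0, 10.0, 300)
pulse = np.sin(2 * np.pi * 1.2 * t)               # ~72 bpm rhythm
rng = np.random.default_rng(0)
views = np.stack([
    pulse + 0.1 * rng.standard_normal(t.size),    # frontal view
    pulse + 0.1 * rng.standard_normal(t.size),    # side view
    rng.standard_normal(t.size),                  # occluded view
])
fused = correlation_aware_fusion(views)
```

In this toy setup the two mutually consistent views receive most of the attention mass, so the fused signal tracks the shared pulse rhythm better than the occluded view alone, which is the intuition the benchmark's occlusion conditions are designed to stress.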