MuDD: A Multimodal Deception Detection Dataset and GSR-Guided Progressive Distillation for Non-Contact Deception Detection
arXiv cs.AI / 3/30/2026
Key Points
- The paper introduces MuDD, a large-scale non-contact deception detection dataset with multimodal recordings (video, audio, GSR) from 130 participants over 690 minutes, aimed at enabling more reliable cross-subject learning.
- MuDD also includes additional physiological signals (photoplethysmography, heart rate) and personality traits, expanding the dataset’s usefulness for broader deception-related research.
- To address modality mismatch between contact-based GSR and non-contact signals, the authors propose GSR-guided Progressive Distillation (GPD) using cross-modal knowledge distillation.
- GPD combines progressive feature-level and digit-level distillation with dynamic routing so the model can adaptively decide which teacher knowledge to transfer during training.
- Experiments reportedly show that GPD outperforms prior methods, achieving state-of-the-art results on both deception detection and concealed-digit identification.
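The key points above describe a combined distillation objective: a feature-level term aligning the student with the GSR teacher's representations, a soft-label term aligning output distributions, and a routing mechanism that weighs the two. The paper's exact formulation is not given here, so the following is a minimal NumPy sketch under those assumptions; `gpd_loss` and its gating scheme are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def softmax(x, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = x / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gpd_loss(student_feat, teacher_feat,
             student_logits, teacher_logits,
             route_score, temperature=2.0):
    """Hypothetical sketch of a GPD-style distillation loss.

    - Feature-level term: MSE between student (non-contact) features
      and teacher (GSR) features.
    - Soft-label term: KL divergence between temperature-softened
      teacher and student output distributions.
    - A learned scalar `route_score` (here just an input) is passed
      through a sigmoid gate to decide, per example or per step, how
      much of each kind of teacher knowledge to transfer.
    """
    gate = 1.0 / (1.0 + np.exp(-route_score))       # sigmoid gate in (0, 1)
    feat_loss = np.mean((student_feat - teacher_feat) ** 2)
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kd_loss = np.sum(p_t * (np.log(p_t + 1e-8) - np.log(p_s + 1e-8)))
    return gate * feat_loss + (1.0 - gate) * kd_loss
```

In an actual training loop the gate would be produced by a small routing network and the weighted loss backpropagated through the student only; here the gate is an input so the trade-off between the two terms is easy to inspect.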