MuDD: A Multimodal Deception Detection Dataset and GSR-Guided Progressive Distillation for Non-Contact Deception Detection

arXiv cs.AI / 3/30/2026


Key Points

  • The paper introduces MuDD, a large-scale non-contact deception detection dataset with multimodal recordings (video, audio, GSR) from 130 participants over 690 minutes, aimed at enabling more reliable cross-subject learning.
  • MuDD also includes additional physiological signals (photoplethysmography, heart rate) and personality traits, expanding the dataset’s usefulness for broader deception-related research.
  • To address modality mismatch between contact-based GSR and non-contact signals, the authors propose GSR-guided Progressive Distillation (GPD) using cross-modal knowledge distillation.
  • GPD combines progressive feature-level and digit-level distillation with dynamic routing so the model can adaptively decide which teacher knowledge to transfer during training.
  • Experiments reportedly show GPD improves performance over prior methods and achieves state-of-the-art results on deception detection and concealed-digit identification.

Abstract

Non-contact automatic deception detection remains challenging because visual and auditory deception cues often lack stable cross-subject patterns. In contrast, galvanic skin response (GSR) provides more reliable physiological cues and has been widely used in contact-based deception detection. In this work, we leverage stable deception-related knowledge in GSR to guide representation learning in non-contact modalities through cross-modal knowledge distillation. A key obstacle, however, is the lack of a suitable dataset for this setting. To address this, we introduce MuDD, a large-scale Multimodal Deception Detection dataset containing recordings from 130 participants over 690 minutes. In addition to video, audio, and GSR, MuDD also provides photoplethysmography, heart rate, and personality traits, supporting broader scientific studies of deception. Based on this dataset, we propose GSR-guided Progressive Distillation (GPD), a cross-modal distillation framework for mitigating the negative transfer caused by the large modality mismatch between GSR and non-contact signals. The core innovation of GPD is the integration of progressive feature-level and digit-level distillation with dynamic routing, which allows the model to adaptively determine how teacher knowledge should be transferred during training, leading to more stable cross-modal knowledge transfer. Extensive experiments and visualizations show that GPD outperforms existing methods and achieves state-of-the-art performance on both deception detection and concealed-digit identification.
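The core mechanism described above, a GSR teacher guiding a non-contact student via feature-level and output-level distillation, with a routing gate deciding how much of each kind of teacher knowledge to transfer, can be sketched in a few lines. This is a minimal pure-Python illustration, not the authors' implementation: the sigmoid gate, the MSE/KL loss forms, and all function names here are assumptions chosen to make the idea concrete.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax, as commonly used for distillation targets.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def feature_distill_loss(teacher_feat, student_feat):
    # Feature-level distillation: mean squared error between the
    # teacher (GSR) representation and the student (video/audio) one.
    return sum((t - s) ** 2 for t, s in zip(teacher_feat, student_feat)) / len(teacher_feat)

def output_distill_loss(teacher_logits, student_logits, temperature=2.0):
    # Output-level distillation: KL divergence between softened
    # teacher and student class distributions.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def gpd_style_loss(teacher_feat, student_feat,
                   teacher_logits, student_logits, route_score):
    # Stand-in for dynamic routing: a learned scalar, squashed to [0, 1],
    # adaptively weights feature-level vs. output-level transfer at each step.
    gate = 1.0 / (1.0 + math.exp(-route_score))  # sigmoid gate (assumed form)
    return (gate * feature_distill_loss(teacher_feat, student_feat)
            + (1.0 - gate) * output_distill_loss(teacher_logits, student_logits))
```

When teacher and student already agree, both loss terms vanish regardless of the gate, so the routing only matters while knowledge is still being transferred; in the paper this gating is learned jointly with the progressive distillation schedule.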