NeuroLip: An Event-driven Spatiotemporal Learning Framework for Cross-Scene Lip-Motion-based Visual Speaker Recognition

arXiv cs.AI / 4/20/2026


Key Points

  • The paper introduces NeuroLip, an event-driven spatiotemporal learning framework for cross-scene visual speaker recognition that identifies speakers from lip-motion dynamics, remaining effective when audio is unavailable.
  • Unlike appearance-dependent approaches, NeuroLip encodes stable, subject-specific articulation dynamics, using an event-based camera to mitigate motion blur and limited dynamic range from frame-based sensors.
  • The method uses three core components: a temporal-aware voxel encoding with adaptive event weighting, a structure-aware spatial enhancer to suppress noise while preserving vertically structured motion cues, and a polarity consistency regularization to retain motion-direction information.
  • The authors also release DVSpeaker, an event-based lip-motion dataset covering 50 subjects across four viewpoint/illumination conditions, and report strong results including >71% accuracy on unseen viewpoints and nearly 76% in low light, improving on prior methods by at least 8.54%.
  • NeuroLip’s training and evaluation follow a strict cross-scene protocol (single-condition training, cross-scene generalization), with the dataset and code made publicly available on GitHub.
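To make the first component concrete, the sketch below shows how an event stream is typically accumulated into a temporal voxel grid with bilinear interpolation in time. The paper's Temporal-aware Voxel Encoding uses adaptive event weighting; the recency weight here is a hypothetical stand-in for that (the actual weighting scheme is not described in this summary), so treat this as an illustrative baseline, not NeuroLip's implementation.

```python
import numpy as np

def events_to_voxel(xs, ys, ts, ps, num_bins, height, width):
    """Accumulate events (x, y, timestamp, polarity) into a temporal voxel grid.

    Standard bilinear-in-time voxelization: each event splits its polarity
    between the two nearest temporal bins. The per-event weight `w` is a
    hypothetical adaptive term (here, a simple recency emphasis) standing in
    for the paper's learned adaptive event weighting.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    t0, t1 = ts.min(), ts.max()
    span = max(t1 - t0, 1e-9)
    # Normalize timestamps to continuous bin coordinates in [0, num_bins - 1]
    tn = (ts - t0) / span * (num_bins - 1)
    lower = np.floor(tn).astype(int)
    frac = tn - lower
    # Hypothetical adaptive weight: later events count up to 2x more
    w = 0.5 + 0.5 * (ts - t0) / span
    # Scatter-add each event's weighted polarity into its two temporal bins
    np.add.at(voxel, (lower, ys, xs), ps * (1.0 - frac) * w)
    upper = np.clip(lower + 1, 0, num_bins - 1)
    np.add.at(voxel, (upper, ys, xs), ps * frac * w)
    return voxel
```

In practice the resulting `(num_bins, H, W)` tensor is what a spatiotemporal backbone (such as the spatial enhancer described above) would consume.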

Abstract

Visual speaker recognition based on lip motion offers a silent, hands-free, and behavior-driven biometric solution that remains effective even when acoustic cues are unavailable. Unlike the appearance-dependent representations that traditional methods rely on, lip motion encodes subject-specific behavioral dynamics driven by consistent articulation patterns and muscle coordination, offering inherent stability across environmental changes. However, capturing these robust, fine-grained dynamics is challenging for conventional frame-based cameras due to motion blur and low dynamic range. To exploit the intrinsic stability of lip motion and address these sensing limitations, we propose NeuroLip, an event-based framework that captures fine-grained lip dynamics under a strict yet practical cross-scene protocol: training is performed under a single controlled condition, while recognition must generalize to unseen viewing and lighting conditions. NeuroLip features: 1) a Temporal-aware Voxel Encoding module with adaptive event weighting, 2) a Structure-aware Spatial Enhancer that amplifies discriminative behavioral patterns by suppressing noise while preserving vertically structured motion information, and 3) a Polarity Consistency Regularization mechanism to retain motion-direction cues encoded in event polarities. To facilitate systematic evaluation, we introduce DVSpeaker, a comprehensive event-based lip-motion dataset comprising 50 subjects recorded under four distinct viewpoint and illumination scenarios. Extensive experiments demonstrate that NeuroLip achieves near-perfect matched-scene accuracy and robust cross-scene generalization, attaining over 71% accuracy on unseen viewpoints and nearly 76% under low-light conditions, outperforming representative existing methods by at least 8.54%. The dataset and code are publicly available at https://github.com/JiuZeongit/NeuroLip.
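The third component, polarity consistency regularization, aims to keep motion-direction information (encoded by the sign of each event) from being washed out by the network. The abstract does not give its exact form, so the sketch below is one plausible reading: a loss that penalizes directional misalignment between a learned feature map and the signed polarity accumulation. The function name and cosine-similarity formulation are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def polarity_consistency_loss(feat, pol_map, eps=1e-8):
    """Hypothetical polarity-consistency regularizer.

    `feat`    : a learned per-pixel feature/response map.
    `pol_map` : signed polarity accumulation (positive minus negative event
                counts per pixel), carrying motion-direction cues.

    Sketched as 1 - cosine similarity between the flattened maps, so the
    loss is ~0 when the feature map preserves the polarity sign structure
    and grows toward 2 when it inverts it.
    """
    f = feat.astype(np.float64).ravel()
    p = pol_map.astype(np.float64).ravel()
    cos = np.dot(f, p) / ((np.linalg.norm(f) + eps) * (np.linalg.norm(p) + eps))
    return 1.0 - cos
```

In a training loop such a term would be added to the recognition loss with a small weight, nudging intermediate features to respect event polarity rather than collapsing positive and negative events together.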