Human-Centered Non-Intrusive Driver State Modeling Using Personalized Physiological Signals in Real-World Automated Driving

arXiv cs.RO / 4/14/2026


Key Points

  • The paper investigates whether non-intrusive, personalized driver state modeling can improve monitoring for SAE Level 2–3 automated driving, where the driver must supervise and respond to take-over requests.
  • Using an Empatica E4 wearable, the study collects multimodal physiological signals (electrodermal activity, heart rate, temperature, and motion) during real-world automated driving experiments.
  • The authors convert physiological signals into 2D representations and apply a multimodal deep learning approach built on pre-trained ResNet50 feature extractors to infer driver awareness/state.
  • Across four drivers, results show large inter-person physiological variability, with personalized models reaching an average accuracy of 92.68% versus 54% for generalized cross-user models.
  • The findings argue that future driver monitoring systems should be adaptive to individual physiological profiles rather than relying on generalized models that may underperform across users.
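The conversion of 1-D physiological time series into 2-D, image-like inputs is central to the approach above. The paper summary does not state which transform is used, so the sketch below shows one simple, illustrative option: stacking overlapping sliding windows of a signal into a 2-D array that a CNN feature extractor such as ResNet50 could consume. The function name and window parameters are assumptions, not the authors' implementation.

```python
import numpy as np

def signal_to_image(signal, window=64, hop=16):
    """Stack overlapping windows of a 1-D signal into a 2-D array.

    One simple way to turn a physiological time series (e.g. EDA or
    heart rate from the Empatica E4) into an image-like CNN input.
    Illustrative only; the paper's exact transform is not specified.
    """
    n = (len(signal) - window) // hop + 1
    rows = [signal[i * hop : i * hop + window] for i in range(n)]
    img = np.stack(rows)  # shape: (n_windows, window)
    # Min-max normalize to [0, 1] so values behave like pixel intensities.
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    return img

# Example: a 4 Hz EDA-like trace (160 s, 640 samples) -> small 2-D "image".
eda = np.sin(np.linspace(0, 8 * np.pi, 640)) + 0.1 * np.random.randn(640)
image = signal_to_image(eda, window=64, hop=16)
print(image.shape)  # (37, 64)
```

A 2-D representation like this can then be resized or tiled to the input resolution a pre-trained image backbone expects.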

Abstract

In vehicles with partial or conditional driving automation (SAE Levels 2-3), the driver remains responsible for supervising the system and responding to take-over requests. Therefore, reliable driver monitoring is essential for safe human-automation collaboration. However, most existing Driver Monitoring Systems rely on generalized models that ignore individual physiological variability. In this study, we examine the feasibility of personalized driver state modeling using non-intrusive physiological sensing during real-world automated driving. We conducted experiments in an SAE Level 2 vehicle using an Empatica E4 wearable sensor to capture multimodal physiological signals, including electrodermal activity, heart rate, temperature, and motion data. To leverage deep learning architectures designed for images, we transformed the physiological signals into two-dimensional representations and processed them using a multimodal architecture based on pre-trained ResNet50 feature extractors. Experiments across four drivers demonstrate substantial interindividual variability in physiological patterns related to driver awareness. Personalized models achieved an average accuracy of 92.68%, whereas generalized models trained on multiple users dropped to an accuracy of 54%, revealing substantial limitations in cross-user generalization. These results underscore the necessity of adaptive, personalized driver monitoring systems for future automated vehicles and imply that autonomous systems should adapt to each driver's unique physiological profile.
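The personalized-versus-generalized gap reported above (92.68% vs. 54%) can be illustrated with a toy model. The sketch below uses synthetic data, not the paper's: each simulated "driver" has a different physiological baseline, and a per-driver decision threshold separates the two states well while a single threshold fit on all drivers pooled together degrades toward chance. All names, baselines, and class offsets are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not the paper's data): each "driver" has a
# different physiological baseline; aware/unaware classes sit at a small
# shift above/below that baseline.
def make_driver(baseline, n=200):
    aware = rng.normal(baseline + 1.0, 0.3, n)    # class 1
    unaware = rng.normal(baseline - 1.0, 0.3, n)  # class 0
    x = np.concatenate([aware, unaware])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return x, y

drivers = [make_driver(b) for b in (2.0, 5.0, 8.0, 11.0)]

def threshold_accuracy(x, y, thr):
    return np.mean((x > thr) == y.astype(bool))

# Personalized: each driver's own threshold (midpoint of their own data).
personalized = [threshold_accuracy(x, y, x.mean()) for x, y in drivers]

# Generalized: one threshold fit on all drivers pooled together.
pooled_x = np.concatenate([x for x, _ in drivers])
generalized = [threshold_accuracy(x, y, pooled_x.mean()) for x, y in drivers]

print(round(float(np.mean(personalized)), 3),
      round(float(np.mean(generalized)), 3))
```

Because the pooled threshold lands between drivers' baselines rather than between each driver's own classes, the generalized model collapses toward 50% accuracy, mirroring (in a highly simplified way) the cross-user failure mode the study reports.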