Eye Gaze-Informed and Context-Aware Pedestrian Trajectory Prediction in Shared Spaces with Automated Shuttles: A Virtual Reality Study
arXiv cs.LG / 3/23/2026
Key Points
- The paper presents a VR study showing how pedestrians interact with automated shuttles in shared urban spaces across varying approach angles and continuous traffic scenarios.
- It introduces GazeX-LSTM, a multimodal model that fuses pedestrians' trajectories, fine-grained eye gaze dynamics, and contextual factors to predict pedestrian trajectories.
- The results demonstrate that eye gaze information provides predictive power beyond head orientation alone, and when combined with contextual information yields super-additive improvements in prediction accuracy.
- The findings advocate for eye gaze-informed and context-aware modeling to enable safer and more adaptive automated vehicle technologies in complex shared spaces.
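The paper's core idea of fusing trajectory, gaze, and context streams into a recurrent predictor can be illustrated with a toy early-fusion LSTM. This is a hedged sketch, not the authors' GazeX-LSTM: the class and function names (`MultimodalLSTMCell`, `fuse_features`), the feature dimensions, and the random inputs are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultimodalLSTMCell:
    """Toy LSTM cell over fused (trajectory, gaze, context) inputs.

    Illustrative only; the real GazeX-LSTM architecture is not
    specified in this summary.
    """
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(input_dim + hidden_dim)
        # One stacked weight matrix for the four gates (input, forget, output, cell).
        self.W = rng.uniform(-scale, scale, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_dim
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
        g = np.tanh(z[3*H:])
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new

def fuse_features(xy, gaze, context):
    # Early fusion: concatenate the per-timestep modalities into one vector.
    return np.concatenate([xy, gaze, context])

# Roll the cell over 8 observed timesteps, then read out a 2D displacement.
hidden = 16
cell = MultimodalLSTMCell(input_dim=2 + 2 + 3, hidden_dim=hidden)
rng = np.random.default_rng(1)
W_out = rng.uniform(-0.1, 0.1, (2, hidden))  # linear readout to (dx, dy)

h, c = np.zeros(hidden), np.zeros(hidden)
for t in range(8):
    xy = rng.normal(size=2)    # pedestrian position (assumed feature)
    gaze = rng.normal(size=2)  # gaze yaw/pitch dynamics (assumed feature)
    ctx = rng.normal(size=3)   # e.g. shuttle distance/angle/speed (assumed)
    h, c = cell.step(fuse_features(xy, gaze, ctx), h, c)

pred = W_out @ h  # predicted next-step displacement, shape (2,)
print(pred.shape)
```

Early fusion (concatenating modalities before the recurrent layer) is the simplest design; the paper's reported super-additive gains from combining gaze with context suggest the modalities carry complementary signal however they are fused.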