Automated Detection of Mutual Gaze and Joint Attention in Dual-Camera Settings via Dual-Stream Transformers
arXiv cs.CV / 5/1/2026
Key Points
- The paper proposes an efficient dual-stream Transformer model that automatically detects mutual gaze and joint attention from synchronized dual-camera recordings captured in laboratory settings.
- It builds on frozen gaze-aware backbones (GazeLLE) to capture strong visual priors and uses a custom token-fusion mechanism to model spatial and semantic relations between interacting subjects.
- Experiments on an ecologically valid caregiver–infant interaction dataset show the method significantly outperforms both a convolutional baseline and a state-of-the-art multimodal LLM.
- The authors open-source the model and pre-trained weights to enable behavioral scientists to fine-tune the system for different laboratory environments, reducing reliance on labor-intensive manual coding.
- Overall, the work bridges computational modeling and applied interaction research by offering a scalable pipeline for behavioral measurement.
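The core idea of the dual-stream design, fusing token embeddings from two synchronized camera views via cross-attention on top of frozen backbones, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the token counts, dimensions, weight initialization, and the mean-pool-plus-linear-head readout are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D, DK = 32, 16  # token dim and attention dim (illustrative choices)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(tok_a, tok_b, wq, wk, wv):
    """Tokens from one camera stream query tokens from the other."""
    q, k, v = tok_a @ wq, tok_b @ wk, tok_b @ wv
    attn = softmax(q @ k.T / np.sqrt(DK))   # (Na, Nb) attention map
    return attn @ v                         # (Na, DK) fused tokens

# Stand-ins for frozen-backbone outputs: one patch-token grid per view.
tok_a = rng.standard_normal((49, D))  # e.g. 7x7 tokens, camera A
tok_b = rng.standard_normal((49, D))  # e.g. 7x7 tokens, camera B

# Randomly initialized fusion weights (the trainable part in this sketch).
wq1, wk1, wv1 = (rng.standard_normal((D, DK)) * 0.1 for _ in range(3))
wq2, wk2, wv2 = (rng.standard_normal((D, DK)) * 0.1 for _ in range(3))

# Bidirectional token fusion, mean-pooled into one joint embedding.
fused_ab = cross_attend(tok_a, tok_b, wq1, wk1, wv1).mean(axis=0)
fused_ba = cross_attend(tok_b, tok_a, wq2, wk2, wv2).mean(axis=0)
joint = np.concatenate([fused_ab, fused_ba])  # (2*DK,) joint embedding

# Linear head -> probability of mutual gaze for this frame pair.
w_head = rng.standard_normal(2 * DK) * 0.1
p_mutual = 1.0 / (1.0 + np.exp(-(joint @ w_head)))
print(f"p(mutual gaze) = {p_mutual:.3f}")
```

In practice the backbones stay frozen, so only the fusion weights and the classification head would be trained, which keeps fine-tuning cheap enough for the per-lab adaptation the authors describe.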