GazeOnce360: Fisheye-Based 360° Multi-Person Gaze Estimation with Global-Local Feature Fusion
arXiv cs.CV / 3/19/2026
Key Points
- GazeOnce360 introduces an end-to-end multi-person gaze estimation model that uses a single upward-facing fisheye camera mounted on a table to cover a 360-degree scene.
- The approach addresses fisheye distortion and perspective variation with rotational convolutions and explicit eye landmark supervision.
- It also proposes MPSGaze360, a large-scale synthetic dataset rendered in Unreal Engine with diverse multi-person configurations and precise 3D gaze and eye landmark annotations.
- A dual-resolution architecture fuses global low-resolution context with high-resolution local eye regions to capture fine-grained eye features.
- Experiments validate the contribution of each component, and a project page is provided for further details.
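The dual-resolution idea above can be illustrated with a minimal sketch: pool a low-resolution global feature map into a shared context vector, then concatenate it with each person's high-resolution eye features. This is an assumption-laden toy, not the paper's architecture; the function name, shapes, and the pool-then-concatenate fusion are all hypothetical stand-ins for whatever learned fusion GazeOnce360 actually uses.

```python
import numpy as np

def fuse_global_local(global_feat, eye_feats):
    """Toy global-local fusion (illustrative only, not the paper's method).

    global_feat: (C_g, H, W) low-resolution scene feature map.
    eye_feats:   list of per-person high-resolution eye feature vectors.
    Returns one fused vector per person: shared global context
    concatenated with that person's local eye features.
    """
    ctx = global_feat.mean(axis=(1, 2))  # global average pool -> (C_g,)
    return np.stack([np.concatenate([ctx, e]) for e in eye_feats])

# Toy shapes: global map with 8 channels; two people with 6-dim eye features.
rng = np.random.default_rng(0)
g = rng.random((8, 4, 4))
eyes = [rng.random(6), rng.random(6)]
fused = fuse_global_local(g, eyes)
# fused.shape -> (2, 14): each row pairs global context with one person's eyes
```

A real model would fuse with learned layers (e.g., attention or MLPs) rather than plain concatenation, but the data flow — one shared global context combined with per-person local crops — is the same.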