Estimating Central, Peripheral, and Temporal Visual Contributions to Human Decision Making in Atari Games
arXiv cs.LG / 4/7/2026
Key Points
- The paper investigates how central (gaze-focused), peripheral, and temporal (past-state) visual information sources each contribute to human decision-making in dynamic Atari environments.
- Using the Atari-HEAD dataset with synchronized eye-tracking, the authors apply a controlled ablation framework and train action-prediction networks under different combinations of included/excluded information sources.
- Results across 20 Atari games show peripheral visual information is the dominant contributor, causing the largest median action-prediction accuracy drops (about 35–44%) when removed.
- Gaze-derived (central) information produces smaller accuracy reductions (~2–3%), while removing past-state information has a wider range of effects (~2–16%); the larger temporal impacts are likely linked to reduced "peripheral-information leakage."
- By clustering states using model-predicted action probabilities, the analysis identifies behavioral regimes such as focus-dominated and periphery-dominated decisions, and proposes a general method for estimating contributions of visual information sources from behavior.
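The ablation framework above can be sketched as masking one information source at a time before feeding frames to the action-prediction network. The mask geometry (a circular gaze-centered fovea), the zero-fill, and the function names below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def ablate_frame(frame, gaze_xy, source, fovea_radius=20):
    """Zero out one visual information source from a single game frame.

    frame   : (H, W) grayscale Atari frame
    gaze_xy : (x, y) eye-tracking fixation for this frame
    source  : "central" removes the gaze-centered region,
              "peripheral" removes everything outside it.
    The circular fovea and zero-filling are illustrative assumptions.
    """
    h, w = frame.shape
    ys, xs = np.ogrid[:h, :w]
    in_fovea = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= fovea_radius ** 2
    out = frame.copy()
    if source == "central":        # drop gaze-focused information
        out[in_fovea] = 0
    elif source == "peripheral":   # drop everything outside the fovea
        out[~in_fovea] = 0
    return out

def build_input(frames, gazes, use_past=True, ablate=None):
    """Stack frames into a network input, optionally dropping the
    temporal (past-state) channel or masking a spatial source."""
    if not use_past:               # temporal ablation: current frame only
        frames, gazes = frames[-1:], gazes[-1:]
    processed = [ablate_frame(f, g, ablate) if ablate else f
                 for f, g in zip(frames, gazes)]
    return np.stack(processed, axis=0)
```

Training the same network architecture on each ablated input and comparing action-prediction accuracy against the full-information baseline then yields the per-source contribution estimates reported above.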
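The state-clustering step can likewise be sketched as grouping states by their model-predicted action distributions. A plain k-means over probability vectors is used here as a stand-in; the paper's actual clustering algorithm, `k`, and initialization are not specified in this summary:

```python
import numpy as np

def cluster_states(action_probs, k=3, iters=50):
    """Group game states by their predicted action distributions.

    action_probs : (N, A) array, each row a softmax over A actions
    returns      : (labels, centroids)
    A simple k-means with deterministic farthest-point initialization;
    an illustrative stand-in, not the paper's exact method.
    """
    X = np.asarray(action_probs, dtype=float)
    # farthest-point initialization: start from the first state,
    # then repeatedly add the state farthest from all chosen centroids
    centroids = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # assign each state to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster empties
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

Clusters whose accuracy degrades mainly under central ablation would correspond to focus-dominated decisions, and those degrading under peripheral ablation to periphery-dominated ones.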