Do Vision Language Models Understand Human Engagement in Games?
arXiv cs.CV / 3/20/2026
Key Points
- The paper evaluates three vision–language models on the GameVibe Few‑Shot dataset across nine first‑person shooter games to assess whether visual cues alone can infer human engagement.
- Zero‑shot predictions from the VLMs are generally weak and often fail to outperform simple per‑game majority‑class baselines, while retrieval‑augmented prompting improves pointwise engagement predictions in some settings (see the sketch after this list).
- Pairwise engagement change prediction remains consistently difficult across strategies, and theory‑guided prompting does not reliably help and may reinforce surface‑level shortcuts.
- The findings suggest a perception–understanding gap in current VLMs: they can recognize visible gameplay cues but struggle to robustly infer human engagement across games.
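The majority‑class comparison mentioned above can be made concrete with a small script. The sketch below is an illustrative assumption, not the paper's evaluation code: it treats engagement as a binary high/low label per clip, groups clips by game, and reports a model's pointwise accuracy next to the per‑game majority‑class baseline. The field names (game, label, prediction) and the toy data are hypothetical.

```python
# Minimal sketch of a per-game majority-class baseline comparison.
# Assumes binary high/low engagement labels; field names are illustrative.
from collections import Counter
from typing import Dict, List


def majority_baseline_accuracy(labels: List[str]) -> float:
    """Accuracy of always predicting the most frequent engagement label."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)


def model_accuracy(labels: List[str], predictions: List[str]) -> float:
    """Pointwise accuracy of a model's engagement predictions."""
    correct = sum(y == p for y, p in zip(labels, predictions))
    return correct / len(labels)


def compare_per_game(records: List[Dict[str, str]]) -> Dict[str, Dict[str, float]]:
    """Group records by game and compare model accuracy to the majority baseline."""
    per_game: Dict[str, List[Dict[str, str]]] = {}
    for r in records:
        per_game.setdefault(r["game"], []).append(r)

    report: Dict[str, Dict[str, float]] = {}
    for game, rows in per_game.items():
        labels = [r["label"] for r in rows]
        preds = [r["prediction"] for r in rows]
        report[game] = {
            "majority_baseline": majority_baseline_accuracy(labels),
            "model": model_accuracy(labels, preds),
        }
    return report


if __name__ == "__main__":
    # Toy records: the model fails to beat the per-game majority baseline.
    toy = [
        {"game": "fps_a", "label": "high", "prediction": "high"},
        {"game": "fps_a", "label": "high", "prediction": "low"},
        {"game": "fps_a", "label": "low", "prediction": "high"},
        {"game": "fps_b", "label": "low", "prediction": "low"},
        {"game": "fps_b", "label": "low", "prediction": "low"},
        {"game": "fps_b", "label": "high", "prediction": "low"},
    ]
    print(compare_per_game(toy))
```

If a model's per‑game accuracy rarely clears the majority_baseline column in such a report, that mirrors the paper's finding that zero‑shot VLM predictions add little over trivially predicting the most frequent engagement label.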