GameplayQA: A Benchmarking Framework for Decision-Dense POV-Synced Multi-Video Understanding of 3D Virtual Agents

arXiv cs.CL / 3/26/2026


Key Points

  • The paper introduces GameplayQA, a benchmarking framework for evaluating how multimodal/agentic models perceive and reason over decision-dense, first-person, multi-video 3D gameplay involving multiple agents.
  • It provides dense, time-synchronized annotations (at 1.22 labels/second) using a triadic decomposition of Self, Other Agents, and the World, and derives 2.4K diagnostic QA pairs across three cognitive-complexity levels (see the sketch after this list for an illustrative record format).
  • The benchmark includes a distractor taxonomy designed to pinpoint specific hallucination modes, enabling more fine-grained error analysis than prior benchmarks.
  • Experiments with frontier MLLMs show a substantial performance gap versus humans, especially in temporal and cross-video grounding, agent-role attribution, and coping with high “decision density.”
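
For intuition, here is a minimal sketch of what a time-synced annotation record and the label-rate figure might look like, assuming a simple per-event schema. The names (AgentRole, AnnotationEvent, label_rate) and fields are illustrative assumptions, not the released GameplayQA format.

```python
# Hypothetical schema for a time-synced, triadic annotation event.
# Illustrative only; not the actual GameplayQA data format.
from dataclasses import dataclass
from enum import Enum


class AgentRole(str, Enum):
    SELF = "self"            # the POV agent whose video this is
    OTHER = "other_agents"   # concurrently acting agents
    WORLD = "world"          # environment states and events


@dataclass
class AnnotationEvent:
    video_id: str     # which synced POV video the caption refers to
    timestamp_s: float  # position on the shared timeline across POVs
    role: AgentRole   # triadic decomposition: Self / Other Agents / World
    caption: str      # state, action, or event description


def label_rate(events: list[AnnotationEvent], duration_s: float) -> float:
    """Labels per second over a clip, e.g. ~1.22 for GameplayQA's density."""
    return len(events) / duration_s if duration_s > 0 else 0.0


if __name__ == "__main__":
    events = [
        AnnotationEvent("pov_a", 0.4, AgentRole.SELF, "reloads weapon behind cover"),
        AnnotationEvent("pov_b", 1.1, AgentRole.OTHER, "teammate pushes the objective"),
        AnnotationEvent("pov_a", 2.0, AgentRole.WORLD, "capture point becomes contested"),
    ]
    print(f"{label_rate(events, duration_s=2.46):.2f} labels/second")  # -> 1.22
```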

Abstract

Multimodal LLMs are increasingly deployed as perceptual backbones for autonomous agents in 3D environments, from robotics to virtual worlds. These applications require agents to perceive rapid state changes, attribute actions to the correct entities, and reason about concurrent multi-agent behaviors from a first-person perspective, capabilities that existing benchmarks do not adequately evaluate. We introduce GameplayQA, a framework for evaluating agent-centric perception and reasoning through video understanding. Specifically, we densely annotate multiplayer 3D gameplay videos at 1.22 labels/second, with time-synced, concurrent captions of states, actions, and events structured around a triadic system of Self, Other Agents, and the World, a natural decomposition for multi-agent environments. From these annotations, we refine 2.4K diagnostic QA pairs organized into three levels of cognitive complexity, accompanied by a structured distractor taxonomy that enables fine-grained analysis of where models hallucinate. Evaluation of frontier MLLMs reveals a substantial gap relative to human performance, with common failures in temporal and cross-video grounding, agent-role attribution, and handling the game's high decision density. We hope GameplayQA stimulates future research at the intersection of embodied AI, agentic perception, and world modeling.
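
As a rough illustration of how a distractor taxonomy can support fine-grained error analysis, the sketch below tags each wrong answer option with a hypothetical hallucination mode and bins a model's mistakes accordingly. DistractorType, QAItem, and error_breakdown are assumed names for this example, not the benchmark's actual schema or API.

```python
# Hypothetical QA item with typed distractors, plus a helper that bins wrong
# answers by the hallucination mode they correspond to. Illustrative only.
from collections import Counter
from dataclasses import dataclass
from enum import Enum


class DistractorType(str, Enum):
    TEMPORAL_SWAP = "temporal_swap"                   # right event, wrong moment
    AGENT_MISATTRIBUTION = "agent_misattribution"     # action assigned to the wrong agent
    CROSS_VIDEO_CONFUSION = "cross_video_confusion"   # evidence drawn from the wrong POV
    FABRICATED_EVENT = "fabricated_event"             # never happened in any video


@dataclass
class QAItem:
    qid: str
    question: str
    options: dict[str, str]                        # option key -> option text
    answer_key: str                                # key of the correct option
    distractor_types: dict[str, DistractorType]    # wrong-option key -> hallucination mode
    level: int                                     # cognitive-complexity level, 1..3


def error_breakdown(items: list[QAItem], predictions: dict[str, str]) -> Counter:
    """Count incorrect predictions by the distractor type the model selected."""
    errors: Counter = Counter()
    for item in items:
        pred = predictions.get(item.qid)
        if pred is not None and pred != item.answer_key and pred in item.distractor_types:
            errors[item.distractor_types[pred]] += 1
    return errors
```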