How Far Can VLMs Go for Visual Bug Detection? Studying 19,738 Keyframes from 41 Hours of Gameplay Videos

arXiv cs.CV / 3/25/2026


Key Points

  • The paper evaluates off-the-shelf vision-language models (VLMs) for visual bug detection on real industrial gameplay QA footage by sampling 19,738 keyframes from 41 hours across 100 videos.
  • Using a single-prompt baseline, the VLM achieves precision of 0.50 and accuracy of 0.72 for determining whether a keyframe contains a bug.
  • Two no-fine-tuning enhancement methods—(1) a secondary judge model and (2) metadata-augmented prompting via retrieval of prior bug reports—only yield marginal gains.
  • The enhancement strategies increase computational cost and can raise output variance, suggesting a limited benefit from prompt/judge-only approaches in this setting.
  • The authors conclude that VLMs can already catch some visual bugs in QA videos, but meaningful further progress likely needs hybrid methods that better split textual reasoning from visual anomaly detection.
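The baseline pipeline summarized above — sample keyframes at a fixed interval, ask the model per frame whether it shows a bug, and score predictions against QA labels — can be sketched as follows. The `classify_keyframe` stub stands in for the actual single-prompt VLM call, and all names and the sampling interval are illustrative, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    video_id: str
    timestamp_s: float
    has_bug_label: bool  # ground-truth QA annotation


def sample_keyframes(duration_s: float, interval_s: float = 7.5):
    """Yield evenly spaced sample timestamps over a video's duration.

    (Interval is a placeholder; the paper's sampling scheme may differ.)
    """
    t = 0.0
    while t < duration_s:
        yield t
        t += interval_s


def classify_keyframe(frame: Keyframe) -> bool:
    """Stand-in for a single-prompt VLM query:
    'Does this keyframe contain a visual bug? Answer yes or no.'
    Faked here so the scoring logic below is runnable."""
    return frame.timestamp_s % 2 == 0  # placeholder prediction


def precision_accuracy(frames, predict):
    """Score per-keyframe bug predictions against QA labels."""
    tp = fp = correct = 0
    for f in frames:
        pred = predict(f)
        if pred and f.has_bug_label:
            tp += 1
        elif pred and not f.has_bug_label:
            fp += 1
        if pred == f.has_bug_label:
            correct += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    accuracy = correct / len(frames) if frames else 0.0
    return precision, accuracy
```

In the paper's setting, this loop runs over 19,738 keyframes drawn from 100 videos, and the reported 0.50 precision / 0.72 accuracy come from exactly this kind of frame-level comparison against QA annotations.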

Abstract

Video-based quality assurance (QA) for long-form gameplay video is labor-intensive and error-prone, yet valuable for assessing game stability and visual correctness over extended play sessions. Vision-language models (VLMs) promise general-purpose visual reasoning capabilities and thus appear attractive for detecting visual bugs directly from video frames. Recent benchmarks suggest that VLMs can achieve promising results in detecting visual glitches on curated datasets. Building on these findings, we conduct a real-world study using industrial QA gameplay videos to evaluate how well VLMs perform in practical scenarios. Our study samples keyframes from long gameplay videos and asks a VLM whether each keyframe contains a bug. Starting from a single-prompt baseline, the model achieves a precision of 0.50 and an accuracy of 0.72. We then examine two common enhancement strategies used to improve VLM performance without fine-tuning: (1) a secondary judge model that re-evaluates VLM outputs, and (2) metadata-augmented prompting through the retrieval of prior bug reports. Across 100 videos totaling 41 hours and 19,738 keyframes, these strategies provide only marginal improvements over the simple baseline, while introducing additional computational cost and output variance. Our findings indicate that off-the-shelf VLMs are already capable of detecting a certain range of visual bugs in QA gameplay videos, but further progress likely requires hybrid approaches that better separate textual and visual anomaly detection.
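The second enhancement strategy — metadata-augmented prompting via retrieval of prior bug reports — amounts to pulling the most relevant historical reports into the prompt before asking the model about a frame. A minimal sketch of that pattern is below; the paper does not specify its retrieval mechanism, so the word-overlap ranking and prompt wording here are purely illustrative:

```python
def retrieve_reports(query: str, reports: list[str], k: int = 2) -> list[str]:
    """Rank prior bug reports by word overlap with the frame description.
    (Illustrative stand-in for whatever retriever the paper actually uses.)"""
    q = set(query.lower().split())
    scored = sorted(
        reports,
        key=lambda r: len(q & set(r.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(frame_desc: str, reports: list[str]) -> str:
    """Assemble a metadata-augmented prompt for a single keyframe."""
    context = "\n".join(f"- {r}" for r in retrieve_reports(frame_desc, reports))
    return (
        "Known bug reports for this title:\n"
        f"{context}\n\n"
        f"Frame context: {frame_desc}\n"
        "Does this keyframe show a visual bug? Answer yes or no."
    )
```

The paper's finding is that this kind of augmentation (like the secondary judge model) adds per-frame cost and output variance while yielding only marginal accuracy gains over the single-prompt baseline.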