Beyond Screenshots: Evaluating VLMs' Understanding of UI Animations

arXiv cs.CL · April 30, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that AI agents operating user interfaces must understand not only static layout but also how animations convey state and feedback, in order to act reliably.
  • It introduces AniMINT, a new dataset of 300 densely annotated UI animation videos, designed to fill the gap left by prior VLM studies focused mainly on screenshots.
  • The authors evaluate state-of-the-art VLMs on multiple abilities: perceiving animation effects, identifying the purpose of animations, and interpreting their meaning.
  • Results indicate VLMs can reliably detect basic (primitive) motion, but struggle with higher-level interpretation compared with human performance.
  • Using MCPC (Motion, Context, and Perceptual Cues), the study analyzes what factors limit VLM performance and outlines directions for future improvements.

Abstract

AI agents operating on user interfaces must understand how interfaces communicate state and feedback to act reliably. As a core communicative modality, animations are increasingly used in modern interfaces, serving critical functional purposes beyond mere aesthetics. Understanding UI animation is therefore essential for comprehensive interface interpretation. However, recent studies of Vision Language Models (VLMs) for UI understanding have focused primarily on static screenshots, leaving it unclear how well these models handle dynamic UI animations. To address this gap, we created AniMINT, a novel dataset of 300 densely annotated UI animation videos. We systematically evaluate state-of-the-art VLMs on UI animation understanding, including their abilities to perceive animation effects, identify animation purposes, and interpret animation meaning. Our results show that VLMs can reliably detect primitive motion. However, their high-level animation interpretation remains inconsistent, with substantial gaps relative to human performance. Finally, we use Motion, Context, and Perceptual Cues (MCPC) to probe factors affecting VLM performance, revealing key bottlenecks and directions for future improvement.
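To make the "primitive motion" level of the evaluation concrete: detecting that *something moved, and roughly which way* can be done with simple frame differencing and centroid tracking, no VLM required. The sketch below is purely illustrative (it is not the paper's method, and the function names and threshold are assumptions); it shows the kind of low-level signal that VLMs handle reliably, in contrast to the higher-level interpretation the paper finds them weak at.

```python
import numpy as np

def detect_motion(frames, threshold=1.0):
    """Return True if any consecutive frame pair differs by more than
    `threshold` mean absolute pixel difference (a crude motion test)."""
    diffs = [np.abs(frames[i + 1].astype(float) - frames[i].astype(float)).mean()
             for i in range(len(frames) - 1)]
    return max(diffs) > threshold

def motion_direction(frames):
    """Estimate the dominant motion direction from the centroid shift of
    bright pixels between the first and last frame."""
    def centroid(f):
        ys, xs = np.nonzero(f > f.mean())
        return np.array([ys.mean(), xs.mean()])
    dy, dx = centroid(frames[-1]) - centroid(frames[0])
    if abs(dx) > abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Synthetic "slide-in" animation: a bright square moving rightward.
frames = np.zeros((5, 32, 32), dtype=np.uint8)
for t in range(5):
    frames[t, 12:20, 4 + 5 * t : 12 + 5 * t] = 255

print(detect_motion(frames))     # motion is present
print(motion_direction(frames))  # dominant shift is horizontal
```

A baseline like this only answers "did it move, and which way" — it says nothing about whether the motion is a dismissal, a loading indicator, or an error shake, which is exactly the purpose- and meaning-level interpretation the paper probes.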