WorldMark: A Unified Benchmark Suite for Interactive Video World Models

arXiv cs.CV / 4/24/2026


Key Points

  • The paper introduces WorldMark, a unified benchmark suite designed to enable fair cross-model comparisons for interactive Image-to-Video world models by using standardized scenes, trajectories, and a common control interface.
  • It includes a shared action-mapping layer that translates a WASD-style action vocabulary into each model's native controls, allowing apples-to-apples evaluation across six major models (a sketch of this layering appears after this list).
  • WorldMark provides a hierarchical set of 500 test cases spanning first/third-person views, photorealistic and stylized scenes, and three difficulty tiers (Easy to Hard) with 20–60 second sequences.
  • The accompanying modular evaluation toolkit measures Visual Quality, Control Alignment, and World Consistency, and the authors plan to release all data, evaluation code, and model outputs. They also launch World Model Arena (warena.ai), an online platform for live, side-by-side model battles with a public leaderboard.
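
The key idea behind the action-mapping layer is that one shared trajectory is replayed on every model, and only a thin per-model adapter varies. Below is a minimal Python sketch of how such a layer could be structured; the adapter names, payload formats, and action vocabulary are illustrative assumptions, not the paper's published API:

```python
# Hypothetical sketch of a unified action-mapping layer in the spirit of
# WorldMark: a shared WASD-style vocabulary is translated into each
# model's native control format. All names here are assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class NativeAction:
    """A single action in some model's native control format."""
    payload: dict

# Each adapter converts one shared-action token into a native action.
ActionAdapter = Callable[[str], NativeAction]

def keyboard_adapter(token: str) -> NativeAction:
    # Hypothetical model that consumes raw key / camera events.
    if token in {"W", "A", "S", "D"}:
        return NativeAction(payload={"key": token.lower()})
    return NativeAction(payload={"camera": token})

def velocity_adapter(token: str) -> NativeAction:
    # Hypothetical model that consumes (forward, strafe, yaw) velocities.
    table = {
        "W": (1.0, 0.0, 0.0), "S": (-1.0, 0.0, 0.0),
        "A": (0.0, -1.0, 0.0), "D": (0.0, 1.0, 0.0),
        "CAM_LEFT": (0.0, 0.0, -0.5), "CAM_RIGHT": (0.0, 0.0, 0.5),
        "IDLE": (0.0, 0.0, 0.0),
    }
    fwd, strafe, yaw = table[token]
    return NativeAction(payload={"forward": fwd, "strafe": strafe, "yaw": yaw})

ADAPTERS: Dict[str, ActionAdapter] = {
    "keyboard_model": keyboard_adapter,
    "velocity_model": velocity_adapter,
}

def map_trajectory(model: str, trajectory: List[str]) -> List[NativeAction]:
    """Translate one shared trajectory into a model's native sequence."""
    return [ADAPTERS[model](tok) for tok in trajectory]

# The same shared trajectory is replayed identically on every model:
shared = ["W", "W", "CAM_LEFT", "D", "IDLE"]
for name in ADAPTERS:
    print(name, map_trajectory(name, shared)[:2])
```

The design point this illustrates is that the trajectory stays fixed while only the adapter changes, which is what makes metrics computed on the outputs comparable across models with heterogeneous inputs.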

Abstract

Interactive video generation models such as Genie, YUME, HY-World, and Matrix-Game are advancing rapidly, yet every model is evaluated on its own benchmark with private scenes and trajectories, making fair cross-model comparison impossible. Existing public benchmarks offer useful metrics such as trajectory error, aesthetic scores, and VLM-based judgments, but none supplies the standardized test conditions -- identical scenes, identical action sequences, and a unified control interface -- needed to make those metrics comparable across models with heterogeneous inputs. We introduce WorldMark, the first benchmark that provides such a common playing field for interactive Image-to-Video world models. WorldMark contributes: (1) a unified action-mapping layer that translates a shared WASD-style action vocabulary into each model's native control format, enabling apples-to-apples comparison across six major models on identical scenes and trajectories; (2) a hierarchical test suite of 500 evaluation cases covering first- and third-person viewpoints, photorealistic and stylized scenes, and three difficulty tiers from Easy to Hard spanning 20-60s; and (3) a modular evaluation toolkit for Visual Quality, Control Alignment, and World Consistency, designed so that researchers can reuse our standardized inputs while plugging in their own metrics as the field evolves. We will release all data, evaluation code, and model outputs to facilitate future research. Beyond offline metrics, we launch World Model Arena (warena.ai), an online platform where anyone can pit leading world models against each other in side-by-side battles and watch the live leaderboard.
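
To make the "plug in your own metrics" design concrete, here is a hedged Python sketch of a standardized test case plus a metric registry grouped by the paper's three evaluation axes. Every class, field name, and the placeholder metric below are assumptions for illustration, not the released toolkit's API:

```python
# Illustrative sketch of WorldMark-style standardized test cases and a
# pluggable metric registry. All names and fields are assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TestCase:
    scene_id: str          # shared scene, identical across models
    viewpoint: str         # "first_person" or "third_person"
    style: str             # "photorealistic" or "stylized"
    difficulty: str        # "easy", "medium", or "hard"
    duration_s: int        # 20-60 second action sequences
    trajectory: List[str]  # shared WASD-style action tokens

# A metric scores one generated video against its standardized case.
Metric = Callable[[str, TestCase], float]  # (video_path, case) -> score

# Registry keyed by the paper's three evaluation axes.
REGISTRY: Dict[str, Dict[str, Metric]] = {
    "visual_quality": {},
    "control_alignment": {},
    "world_consistency": {},
}

def register(axis: str, name: str):
    """Decorator that plugs a custom metric into one evaluation axis."""
    def wrap(fn: Metric) -> Metric:
        REGISTRY[axis][name] = fn
        return fn
    return wrap

@register("control_alignment", "trajectory_error")
def trajectory_error(video_path: str, case: TestCase) -> float:
    # Placeholder: compare the commanded trajectory to camera motion
    # estimated from the generated video (an assumption, not
    # necessarily the paper's metric).
    return 0.0
```

Under this kind of design, the standardized inputs (scenes and trajectories) stay fixed while the metric set evolves, which is the reusability property the abstract emphasizes.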