WebVR: Benchmarking Multimodal LLMs for WebPage Recreation from Videos via Human-Aligned Visual Rubrics
arXiv cs.CV / 3/17/2026
📰 News · Models & Research
Key Points
- WebVR introduces a dedicated benchmark to evaluate multimodal LLMs' ability to recreate webpages from demonstration videos, capturing interaction flow, timing, and motion continuity.
- The dataset contains 175 webpages created via a controlled synthesis pipeline to ensure varied, realistic demonstrations without overlap with existing pages.
- It includes a fine-grained, human-aligned visual rubric for comprehensive evaluation; the automatic rubric scoring agrees with human preferences 96% of the time.
- Experiments across 19 models reveal gaps in reproducing fine-grained style and motion quality, signaling areas for improvement.
- The authors release the dataset, evaluation toolkit, and baseline results to facilitate future research on video-to-webpage generation.
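The 96% figure above describes how often the automatic rubric's verdicts match human preference judgments. A minimal sketch of that kind of measurement is below, assuming a weighted-sum rubric over fine-grained visual dimensions and pairwise human preferences; all names, weights, and data are illustrative, not taken from the paper.

```python
# Hypothetical sketch: how often an automatic rubric agrees with human
# pairwise preferences. Rubric dimensions and weights are invented for
# illustration and are not WebVR's actual rubric.

def rubric_score(item_scores: dict, weights: dict) -> float:
    """Weighted sum over fine-grained rubric items (e.g. layout, motion)."""
    return sum(weights[k] * item_scores[k] for k in weights)

def agreement_rate(pairs, weights) -> float:
    """Fraction of human A-vs-B preferences matched by the rubric's ranking."""
    matches = 0
    for scores_a, scores_b, human_prefers_a in pairs:
        rubric_prefers_a = rubric_score(scores_a, weights) > rubric_score(scores_b, weights)
        matches += (rubric_prefers_a == human_prefers_a)
    return matches / len(pairs)

# Toy example: two rubric dimensions, two human comparisons.
weights = {"layout": 0.6, "motion": 0.4}
pairs = [
    ({"layout": 0.9, "motion": 0.8}, {"layout": 0.5, "motion": 0.6}, True),
    ({"layout": 0.3, "motion": 0.4}, {"layout": 0.7, "motion": 0.9}, False),
]
print(agreement_rate(pairs, weights))  # → 1.0 (rubric matches both human calls)
```

A real evaluation would aggregate this rate over many page-recreation pairs; the paper's 96% is the analogous statistic for its rubric against human judges.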