RobotArena $\infty$: Scalable Robot Benchmarking via Real-to-Sim Translation

arXiv cs.RO / 3/23/2026


Key Points

  • RobotArena Infinity is a scalable benchmarking framework that shifts real-world robot policy evaluation into large-scale simulated environments with online human feedback.
  • The framework automatically converts video demonstrations from existing robot datasets into digital twins using vision-language models, 2D-to-3D generative modeling, and differentiable rendering (see the pipeline sketch after this list).
  • Evaluation combines automated vision-language-model-guided scoring with scalable human preference judgments collected from crowdworkers, reducing the need for manual supervision.
  • Robustness is tested by systematically perturbing simulations (e.g., textures and object placements) to assess policy generalization under controlled variation.
  • The goal is a continuously evolving, reproducible benchmark that closes the scalability, safety, and reproducibility gaps in real-world robotic testing.
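
To make the real-to-sim step concrete, here is a minimal Python sketch of how such a conversion pipeline could be staged. Everything in it is a hypothetical placeholder: `describe_scene`, `lift_to_3d`, and `refine_pose` are invented stand-ins for the vision-language model, 2D-to-3D generative model, and differentiable-rendering components the paper names, not its actual interfaces.

```python
from dataclasses import dataclass, field

# All stage implementations below are hypothetical stand-ins; the paper's
# pipeline uses vision-language models, 2D-to-3D generative models, and
# differentiable rendering, none of which are reproduced here.

@dataclass
class ObjectHypothesis:
    name: str       # object label proposed by the VLM stage
    mesh_path: str  # asset produced by the 2D-to-3D stage
    pose: tuple     # (x, y, z, qx, qy, qz, qw), refined by rendering

@dataclass
class DigitalTwin:
    task_instruction: str
    objects: list = field(default_factory=list)

def describe_scene(video_frames):
    """Stage 1 (VLM): name the task and the manipulable objects.
    Placeholder: a real system would query a vision-language model."""
    return "put the carrot on the plate", ["carrot", "plate"]

def lift_to_3d(label):
    """Stage 2 (2D-to-3D): produce a mesh for each detected object.
    Placeholder path: a real system would run a generative 3D model."""
    return f"assets/{label}.obj"

def refine_pose(mesh_path, video_frames):
    """Stage 3 (differentiable rendering): align the asset's pose so the
    rendered scene matches the demonstration. Identity placeholder."""
    return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0)

def video_to_twin(video_frames):
    """Chain the three stages to turn one demonstration into a twin."""
    instruction, labels = describe_scene(video_frames)
    twin = DigitalTwin(task_instruction=instruction)
    for label in labels:
        mesh = lift_to_3d(label)
        twin.objects.append(
            ObjectHypothesis(label, mesh, refine_pose(mesh, video_frames)))
    return twin

if __name__ == "__main__":
    print(video_to_twin(video_frames=[]))
```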

Abstract

The pursuit of robot generalists, agents capable of performing diverse tasks across diverse environments, demands rigorous and scalable evaluation. Yet real-world testing of robot policies remains fundamentally constrained: it is labor-intensive, slow, unsafe at scale, and difficult to reproduce. As policies expand in scope and complexity, these barriers only intensify, since defining "success" in robotics often hinges on nuanced human judgments of execution quality. We introduce RobotArena Infinity, a new benchmarking framework that overcomes these challenges by shifting vision-language-action (VLA) evaluation into large-scale simulated environments augmented with online human feedback. Leveraging advances in vision-language models, 2D-to-3D generative modeling, and differentiable rendering, our approach automatically converts video demonstrations from widely used robot datasets into simulated counterparts. Within these digital twins, we assess VLA policies using both automated vision-language-model-guided scoring and scalable human preference judgments collected from crowdworkers, transforming human involvement from tedious scene setup, resetting, and safety supervision into lightweight preference comparisons. To measure robustness, we systematically perturb simulated environments along multiple axes, including textures and object placements, stress-testing policy generalization under controlled variation. The result is a continuously evolving, reproducible, and scalable benchmark for real-world-trained robot manipulation policies, addressing a critical missing capability in today's robotics landscape.
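
The abstract does not specify how crowdworker preference comparisons are aggregated into a policy ranking. Arena-style benchmarks commonly apply an Elo-style update over pairwise verdicts; the sketch below shows that aggregation step under this assumption, with invented policy names and a conventional K-factor of 32.

```python
from collections import defaultdict

def elo_update(ratings, winner, loser, k=32.0):
    """Apply one Elo update from a single pairwise preference judgment.
    `ratings` maps a policy name to its current rating."""
    ra, rb = ratings[winner], ratings[loser]
    expected_win = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
    ratings[winner] = ra + k * (1.0 - expected_win)
    ratings[loser] = rb - k * (1.0 - expected_win)

# Hypothetical crowdworker verdicts as (preferred, rejected) pairs.
judgments = [("vla_a", "vla_b"), ("vla_a", "vla_c"), ("vla_c", "vla_b")]

ratings = defaultdict(lambda: 1000.0)  # every policy starts at 1000
for winner, loser in judgments:
    elo_update(ratings, winner, loser)

print(dict(ratings))  # higher rating = more often preferred
```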
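Likewise, the abstract names textures and object placements as perturbation axes but gives no concrete protocol. One plausible scheme, sketched below, is to enumerate a seeded cross-product of perturbations so every policy is evaluated on the identical, reproducible set of scene variants; `TEXTURES` and `PLACEMENT_JITTER_CM` are illustrative assumptions, not the paper's actual configuration.

```python
import itertools
import random

# Hypothetical perturbation axes: the paper names textures and object
# placements among others; the specific values here are illustrative.
TEXTURES = ["wood", "marble", "checker"]
PLACEMENT_JITTER_CM = [0.0, 2.0, 5.0]  # radius of random object offset

def perturbed_episodes(base_scene, seed=0):
    """Yield one episode spec per point in the perturbation grid.
    A fixed seed keeps the variant set identical across policies."""
    rng = random.Random(seed)
    for texture, jitter in itertools.product(TEXTURES, PLACEMENT_JITTER_CM):
        dx = rng.uniform(-jitter, jitter)
        dy = rng.uniform(-jitter, jitter)
        yield {**base_scene,
               "table_texture": texture,
               "object_offset_cm": (round(dx, 2), round(dy, 2))}

for episode in perturbed_episodes({"task": "put the carrot on the plate"}):
    print(episode)
```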