LARY: A Latent Action Representation Yielding Benchmark for Generalizable Vision-to-Action Alignment

arXiv cs.RO / 4/14/2026

Key Points

  • The paper introduces LARY, a benchmark and evaluation framework to test how well latent action representations support generalizable vision-to-action alignment at both the semantic level (what to do) and the control level (how to do it); a minimal probing sketch of this two-level setup follows the list.
  • LARY is built from large-scale human video and auxiliary data: over 1.0M videos (1,000 hours) spanning 151 action categories, plus 620K image pairs and 595K motion trajectories across varied embodiments and environments.
  • Experiments show that general visual foundation models trained without explicit action supervision outperform specialized embodied latent action models on the benchmark.
  • The study finds that latent-based visual representations align more closely with physical action space than pixel-based representations.
  • Overall, the results support the idea that general visual representations encode action-relevant knowledge for physical control and that semantic abstraction is a more effective route from vision to action than pixel reconstruction.

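The page does not include the paper's evaluation code, but the two-level idea can be illustrated with a simple probing setup: freeze a visual (or latent-action) encoder, then fit one linear probe for action-category recognition (semantic level) and another for continuous control targets (control level). The encoder, data shapes, and metrics below are illustrative assumptions on synthetic data, not LARY's actual protocol.

```python
# A minimal sketch of a two-level probing evaluation in the spirit of LARY.
# The "encoder", dataset shapes, and metrics are stand-ins, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.metrics import accuracy_score, r2_score

rng = np.random.default_rng(0)

def encode(frames: np.ndarray) -> np.ndarray:
    """Placeholder for a frozen visual or latent-action encoder.

    In a real evaluation this would be, e.g., a pretrained ViT applied to
    consecutive frame pairs; here we just flatten and randomly project.
    """
    flat = frames.reshape(len(frames), -1)
    proj = rng.standard_normal((flat.shape[1], 256)) / np.sqrt(flat.shape[1])
    return flat @ proj

# Synthetic stand-ins for benchmark data: frame pairs, semantic labels
# (LARY has 151 action categories) and low-level control targets.
frames = rng.standard_normal((2000, 2, 32, 32, 3))   # consecutive frame pairs
categories = rng.integers(0, 151, size=2000)         # "what to do"
actions = rng.standard_normal((2000, 7))             # "how to do it" (e.g. 7-DoF deltas)

z = encode(frames)
train, test = slice(0, 1500), slice(1500, None)

# Semantic level: linear probe for action-category recognition.
clf = LogisticRegression(max_iter=1000).fit(z[train], categories[train])
print("semantic accuracy:", accuracy_score(categories[test], clf.predict(z[test])))

# Control level: linear regression from the same frozen features to robot actions.
reg = Ridge(alpha=1.0).fit(z[train], actions[train])
print("control R^2:", r2_score(actions[test], reg.predict(z[test])))
```
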
Abstract

While the shortage of explicit action data limits Vision-Language-Action (VLA) models, human action videos offer a scalable yet unlabeled data source. A critical challenge in utilizing large-scale human video datasets lies in transforming visual signals into ontology-independent representations, known as latent actions. However, the capacity of latent action representations to derive robust control from visual observations has yet to be rigorously evaluated. We introduce the Latent Action Representation Yielding (LARY) Benchmark, a unified framework for evaluating latent action representations on both high-level semantic actions (what to do) and low-level robotic control (how to do it). The comprehensively curated dataset encompasses over one million videos (1,000 hours) spanning 151 action categories, alongside 620K image pairs and 595K motion trajectories across diverse embodiments and environments. Our experiments reveal two crucial insights: (i) General visual foundation models, trained without any action supervision, consistently outperform specialized embodied latent action models. (ii) Latent-based visual space is fundamentally better aligned with physical action space than pixel-based space. These results suggest that general visual representations inherently encode action-relevant knowledge for physical control, and that semantic-level abstraction serves as a fundamentally more effective pathway from vision to action than pixel-level reconstruction.
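
To make the second insight concrete, here is a hedged comparison sketch: fit the same linear probe to (a) raw pixel differences and (b) features from a stand-in encoder, and compare how much of the action variance each representation explains. All names, shapes, and the random-projection "encoder" are assumptions for illustration; they do not reproduce the paper's models or numbers.

```python
# Hedged illustration of comparing pixel-based vs latent-based representations
# by how well a linear probe predicts actions from each. Data here is random,
# so the scores are not meaningful; the point is the comparison protocol.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
frames = rng.standard_normal((1000, 2, 16, 16, 3))   # dummy frame pairs
actions = rng.standard_normal((1000, 7))              # dummy control targets

# (a) Pixel-based baseline: flattened frame differences.
pixel_repr = (frames[:, 1] - frames[:, 0]).reshape(1000, -1)
# (b) Latent-based stand-in: a random linear projection acting as a frozen encoder.
latent_repr = pixel_repr @ rng.standard_normal((pixel_repr.shape[1], 256)) * 0.01

for name, X in [("pixel", pixel_repr), ("latent", latent_repr)]:
    score = cross_val_score(Ridge(alpha=1.0), X, actions, cv=3, scoring="r2").mean()
    print(f"{name:>6} alignment (R^2): {score:.3f}")
```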