JoyAI-RA 0.1: A Foundation Model for Robotic Autonomy
arXiv cs.RO / 4/23/2026
Key Points
- The paper introduces JoyAI-RA 0.1, a vision-language-action (VLA) embodied foundation model designed to improve robotic autonomy in open-world settings.
- It targets key limitations of prior work, including insufficient diversity in training data and weak generalization across different robot embodiments.
- JoyAI-RA uses a multi-source, multi-level pretraining approach that combines web data, large-scale egocentric human manipulation videos, simulation trajectory data, and real-robot data.
- The model includes explicit action-space unification to bridge embodiment gaps, particularly between human manipulation behaviors and robotic control, improving transfer of learned behaviors.
- The authors report that JoyAI-RA outperforms state-of-the-art methods on both simulation and real-world benchmarks, especially for diverse tasks requiring generalization.
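The action-space unification mentioned above can be illustrated with a minimal sketch: actions from embodiments with different dimensionalities and ranges are scaled into a shared, fixed-size, normalized vector that a single policy head can predict. The unified dimensionality, min-max normalization scheme, and embodiment specifications below are illustrative assumptions, not details from the paper.

```python
import numpy as np

UNIFIED_DIM = 8  # assumed shared action dimensionality (hypothetical)

# Per-embodiment action bounds (assumed): lower/upper limits per dimension.
EMBODIMENTS = {
    "human_hand": {"low": np.array([-0.5, -0.5, -0.5]),
                   "high": np.array([0.5, 0.5, 0.5])},
    "7dof_arm":   {"low": np.full(7, -1.5),
                   "high": np.full(7, 1.5)},
}

def unify_action(embodiment: str, action: np.ndarray) -> np.ndarray:
    """Scale an embodiment-specific action to [-1, 1] and zero-pad to UNIFIED_DIM."""
    spec = EMBODIMENTS[embodiment]
    # Min-max scale each dimension into the shared [-1, 1] range.
    scaled = 2.0 * (action - spec["low"]) / (spec["high"] - spec["low"]) - 1.0
    # Zero-pad so every embodiment maps to the same vector size.
    out = np.zeros(UNIFIED_DIM)
    out[: scaled.shape[0]] = scaled
    return out

def deunify_action(embodiment: str, unified: np.ndarray) -> np.ndarray:
    """Invert unify_action: crop to the embodiment's dims and rescale to its bounds."""
    spec = EMBODIMENTS[embodiment]
    d = spec["low"].shape[0]
    scaled = unified[:d]
    return (scaled + 1.0) / 2.0 * (spec["high"] - spec["low"]) + spec["low"]
```

In this sketch the round trip is lossless, so behaviors learned from human-hand data and robot-arm data share one representation while remaining recoverable per embodiment.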