Being-H0.7: A Latent World-Action Model from Egocentric Videos

arXiv cs.CV / 5/4/2026


Key Points

  • Being-H0.7 is a latent world-action model for visual-language-action (VLA) robot control that aims to incorporate future-aware reasoning without generating future video frames.
  • It addresses limitations of existing approaches: sparse action supervision can cause shortcut learning, and pixel-space future prediction is costly and may be indirect for control.
  • The model uses learnable latent queries as a compact “reasoning interface” between perception and action, enabling future-aware structure while staying efficient.
  • Training uses a dual-branch setup: a prior branch that infers latent states from the current context at inference time, and a training-only posterior branch that encodes future observations. The two branches are aligned in the latent space.
  • Experiments on six simulation benchmarks and real-world tasks show Being-H0.7 reaches state-of-the-art or comparable performance while matching the deployability of direct VLA policies.

Abstract

Visual-Language-Action models (VLAs) have advanced generalist robot control by mapping multimodal observations and language instructions directly to actions, but sparse action supervision often encourages shortcut mappings rather than representations of dynamics, contact, and task progress. Recent world-action models introduce future prediction through video rollouts, yet pixel-space prediction is a costly and indirect substrate for control, as it may model visual details irrelevant to action generation and introduces substantial training or inference overhead. We present Being-H0.7, a latent world-action model that brings future-aware reasoning into VLA-style policies without generating future frames. Being-H0.7 inserts learnable latent queries between perception and action as a compact reasoning interface, and trains them with a future-informed dual-branch design: a deployable prior branch infers latent states from the current context, while a training-only posterior branch replaces the queries with embeddings from future observations. Jointly aligning the two branches in the latent reasoning space leads the prior branch to infer future-aware, action-useful structure from current observations alone. At inference, Being-H0.7 discards the posterior branch and performs no visual rollout. Experiments across six simulation benchmarks and diverse real-world tasks show that Being-H0.7 achieves state-of-the-art or comparable performance, combining the predictive benefits of world models with the efficiency and deployability of direct VLA policies.
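The dual-branch design described in the abstract can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: all class names, dimensions, the cross-attention prior, the mean-pooled posterior, and the MSE alignment loss are assumptions made for exposition.

```python
import torch
import torch.nn as nn


class LatentWorldActionModel(nn.Module):
    """Illustrative sketch of a dual-branch latent world-action model.

    Learnable latent queries act as a compact reasoning interface between
    perception and action; a training-only posterior branch supervises them
    with embeddings of future observations. All details are assumptions.
    """

    def __init__(self, obs_dim=64, latent_dim=32, n_queries=4, act_dim=7):
        super().__init__()
        # Learnable latent queries: the compact "reasoning interface".
        self.queries = nn.Parameter(torch.randn(n_queries, latent_dim))
        # Prior branch: infers latent states from the current context
        # via cross-attention (queries attend to observation tokens).
        self.obs_proj = nn.Linear(obs_dim, latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, num_heads=4,
                                          batch_first=True)
        # Training-only posterior branch: embeds future observations.
        self.post_proj = nn.Linear(obs_dim, latent_dim)
        # Action head decodes actions directly from the latent states.
        self.act_head = nn.Linear(n_queries * latent_dim, act_dim)

    def prior(self, obs):
        # obs: (B, T, obs_dim) -> latent states (B, n_queries, latent_dim)
        kv = self.obs_proj(obs)
        q = self.queries.expand(obs.size(0), -1, -1)
        z, _ = self.attn(q, kv, kv)
        return z

    def posterior(self, future_obs):
        # Mean-pool future observations into the query slots -- an assumed
        # simplification of "replacing queries with future embeddings".
        z = self.post_proj(future_obs).mean(dim=1, keepdim=True)
        return z.expand(-1, self.queries.size(0), -1)

    def forward(self, obs, future_obs=None):
        z_prior = self.prior(obs)
        action = self.act_head(z_prior.flatten(1))
        if future_obs is None:
            # Deployment: posterior branch discarded, no visual rollout.
            return action, None
        # Training: align prior latents with future-informed posterior latents.
        z_post = self.posterior(future_obs)
        align_loss = nn.functional.mse_loss(z_prior, z_post.detach())
        return action, align_loss
```

At training time the model is called with both current and future observations and the alignment loss is added to the action loss; at deployment only the current context is passed, so inference cost matches a direct VLA policy.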