AIM: Intent-Aware Unified World Action Modeling with Spatial Value Maps

arXiv cs.RO / 4/14/2026


Key Points

  • The paper introduces AIM, an intent-aware unified world action model that addresses a mismatch between video-based world modeling (scene evolution) and action generation (where/how to interact with intent).
  • AIM uses an explicit spatial interface by predicting an aligned spatial value map, routing future information to the action branch through value representations rather than decoding directly from future visuals.
  • The method builds on pretrained video generation with a mixture-of-transformers shared architecture and employs intent-causal attention to isolate relevant future cues for action.
  • It adds a self-distillation reinforcement learning stage that freezes the video and value branches, optimizing only the action head using dense rewards from projected value-map responses plus sparse task-level signals.
  • On the RoboTwin 2.0 benchmark, AIM reportedly reaches a 94.0% average success rate and shows larger gains for long-horizon and contact-sensitive manipulation tasks, supported by a new 30K-trajectory simulation dataset with value-map annotations.
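The routing described above can be made concrete with a small sketch. Assuming tokens are concatenated as [current observation | future video | value map | action], intent-causal attention amounts to an attention mask in which action tokens may read the value-map tokens but never the future video tokens directly. The group layout, slicing, and per-group rules below are illustrative, not the paper's exact implementation:

```python
import numpy as np

def intent_causal_mask(n_obs, n_video, n_value, n_action):
    """Boolean attention mask (True = may attend) over concatenated
    token groups [current obs | future video | value map | action].

    Sketch of intent-causal attention: the action branch receives
    future information only via the value-map tokens, never directly
    from the future video tokens."""
    n = n_obs + n_video + n_value + n_action
    mask = np.zeros((n, n), dtype=bool)
    obs = slice(0, n_obs)
    vid = slice(n_obs, n_obs + n_video)
    val = slice(n_obs + n_video, n_obs + n_video + n_value)

    mask[obs, obs] = True                                # obs attends to itself
    mask[vid, obs] = mask[vid, vid] = True               # video sees obs + itself
    mask[val, obs] = mask[val, vid] = mask[val, val] = True  # value map reads the future
    a0 = n_obs + n_video + n_value
    mask[a0:, obs] = mask[a0:, val] = True               # action sees obs + value map
    # mask[action, video] stays False: no direct future-visual shortcut
    for i in range(n_action):                            # causal within the action chunk
        mask[a0 + i, a0:a0 + i + 1] = True
    return mask
```

With `intent_causal_mask(2, 3, 4, 2)`, the action rows are True over the observation and value-map columns and False over all future-video columns, which is the structural constraint the paper's action branch relies on.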

Abstract

Pretrained video generation models provide strong priors for robot control, but existing unified world action models still struggle to decode reliable actions without substantial robot-specific training. We attribute this limitation to a structural mismatch: while video models capture how scenes evolve, action generation requires explicit reasoning about where to interact and the underlying manipulation intent. We introduce AIM, an intent-aware unified world action model that bridges this gap via an explicit spatial interface. Instead of decoding actions directly from future visual representations, AIM predicts an aligned spatial value map that encodes task-relevant interaction structure, enabling a control-oriented abstraction of future dynamics. Built on a pretrained video generation model, AIM jointly models future observations and value maps within a shared mixture-of-transformers architecture. It employs intent-causal attention to route future information to the action branch exclusively through the value representation. We further propose a self-distillation reinforcement learning stage that freezes the video and value branches and optimizes only the action head using dense rewards derived from projected value-map responses together with sparse task-level signals. To support training and evaluation, we construct a simulation dataset of 30K manipulation trajectories with synchronized multi-view observations, actions, and value-map annotations. Experiments on the RoboTwin 2.0 benchmark show that AIM achieves a 94.0% average success rate, significantly outperforming prior unified world action baselines. Notably, the improvement is more pronounced in long-horizon and contact-sensitive manipulation tasks, demonstrating the effectiveness of explicit spatial-intent modeling as a bridge between visual world modeling and robot control.
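The dense-reward construction in the self-distillation stage can be sketched as follows: project the end-effector into the camera frame, read off the predicted value map at that pixel, and add a sparse task-level bonus. The pinhole projection is standard; the reward weights (`w_dense`, `w_sparse`) and the exact combination are assumptions for illustration, not the paper's reward function:

```python
import numpy as np

def project_point(K, p_cam):
    """Pinhole projection of a 3-D camera-frame point to a pixel (u, v)."""
    u = K[0, 0] * p_cam[0] / p_cam[2] + K[0, 2]
    v = K[1, 1] * p_cam[1] / p_cam[2] + K[1, 2]
    return int(round(u)), int(round(v))

def dense_reward(value_map, K, ee_pos_cam, task_success,
                 w_dense=1.0, w_sparse=10.0):
    """Illustrative reward: value-map response at the projected
    end-effector pixel plus a sparse task-level bonus."""
    u, v = project_point(K, ee_pos_cam)
    h, w = value_map.shape
    u = int(np.clip(u, 0, w - 1))        # keep the lookup inside the map
    v = int(np.clip(v, 0, h - 1))
    return w_dense * float(value_map[v, u]) + w_sparse * float(task_success)
```

Because the value and video branches are frozen in this stage, the value map acts as a fixed, spatially grounded critic: only the action head is updated against these rewards.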