StableIDM: Stabilizing Inverse Dynamics Model against Manipulator Truncation via Spatio-Temporal Refinement

arXiv cs.RO, April 21, 2026


Key Points

  • StableIDM addresses a key weakness of inverse dynamics models (IDMs) in embodied AI: performance collapses when the manipulator is truncated, making state recovery ill-posed and control unstable.
  • The method stabilizes action prediction under partial observability using a spatio-temporal refinement framework with auxiliary robot-centric masking, geometry-aware Directional Feature Aggregation (DFA), and motion-continuity-based Temporal Dynamics Refinement (TDR).
  • Experiments on the AgiBot benchmark show a 12.1% improvement in strict action accuracy under severe truncation compared with prior approaches.
  • In real-robot replay and downstream systems, StableIDM increases average task success by 9.7%, raises end-to-end grasp success by 11.5% when decoding video-generated plans, and improves vision-language-action (VLA) real-robot success by 17.6% when used as an automatic annotator.

Abstract

Inverse Dynamics Models (IDMs) map visual observations to low-level action commands, serving as central components for data labeling and policy execution in embodied AI. However, their performance degrades severely under manipulator truncation, a common failure mode that makes state recovery ill-posed and leads to unstable control. We present StableIDM, a spatio-temporal framework that refines features from visual inputs to stabilize action predictions under such partial observability. StableIDM integrates three complementary components: (1) auxiliary robot-centric masking to suppress background clutter, (2) Directional Feature Aggregation (DFA) for geometry-aware spatial reasoning, which extracts anisotropic features along directions inferred from the visible arm, and (3) Temporal Dynamics Refinement (TDR) to smooth and correct predictions via motion continuity. Extensive evaluations validate our approach: StableIDM improves strict action accuracy by 12.1% under severe truncation on the AgiBot benchmark, and increases average task success by 9.7% in real-robot replay. Moreover, it boosts end-to-end grasp success by 11.5% when decoding video-generated plans, and improves downstream VLA real-robot success by 17.6% when functioning as an automatic annotator. These results demonstrate that StableIDM provides a robust and scalable backbone for both policy execution and data generation in embodied artificial intelligence.
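To give intuition for the motion-continuity idea behind TDR, here is a minimal, hypothetical sketch (not the paper's learned architecture; the function name, hyperparameters, and clamping rule are all assumptions for illustration): blend each raw per-frame action prediction with the previous refined one, and cap the per-step change so a single corrupted frame, such as one where the manipulator is truncated, cannot produce a sudden jump in the commanded action.

```python
# Hypothetical illustration of motion-continuity smoothing in the spirit
# of Temporal Dynamics Refinement (TDR). The real method is learned; this
# only shows the underlying principle on 1-D scalar actions.

def refine_actions(raw_actions, alpha=0.6, max_step=0.2):
    """Smooth a sequence of per-frame scalar action predictions.

    raw_actions: list of floats (raw IDM outputs, one per frame)
    alpha:       weight on the new observation in the blend (assumed)
    max_step:    largest allowed change between consecutive refined actions
    """
    refined = []
    prev = None
    for a in raw_actions:
        if prev is None:
            refined.append(a)  # first frame: nothing to smooth against
        else:
            # blend the new prediction with the previous refined action
            blended = alpha * a + (1.0 - alpha) * prev
            # enforce motion continuity: clamp the per-step change
            delta = max(-max_step, min(max_step, blended - prev))
            refined.append(prev + delta)
        prev = refined[-1]
    return refined
```

For example, an outlier spike in the raw sequence `[0.0, 0.1, 1.5, 0.2]` (the 1.5 standing in for a prediction on a truncated frame) is damped rather than passed through, since consecutive refined actions can differ by at most `max_step`.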