Goal2Skill: Long-Horizon Manipulation with Adaptive Planning and Reflection

arXiv cs.RO / 4/16/2026


Key Points

  • Goal2Skill is a proposed dual-system framework for long-horizon embodied manipulation that addresses the brittleness of existing policies under partial observability, occlusions, and failure-prone multi-stage tasks.
  • The approach separates high-level semantic reasoning (a VLM-based agentic planner with structured task memory, goal decomposition, outcome verification, and error-driven correction) from low-level motor execution (a VLA-based visuomotor controller using diffusion-based action generation).
  • It forms a closed planning–execution loop that supports memory-aware reasoning, adaptive replanning, and explicit online recovery from execution failures.
  • Experiments on RMBench show a large performance gain, with a 32.4% average success rate versus 9.8% for the strongest baseline.
  • Ablation results indicate that structured memory and closed-loop recovery are key contributors to improved long-horizon manipulation performance.

Abstract

Recent vision-language-action (VLA) systems have demonstrated strong capabilities in embodied manipulation. However, most existing VLA policies rely on limited observation windows and end-to-end action prediction, which makes them brittle in long-horizon, memory-dependent tasks with partial observability, occlusions, and multi-stage dependencies. Such tasks require not only precise visuomotor control, but also persistent memory, adaptive task decomposition, and explicit recovery from execution failures. To address these limitations, we propose a dual-system framework for long-horizon embodied manipulation. Our framework explicitly separates high-level semantic reasoning from low-level motor execution. A high-level planner, implemented as a VLM-based agentic module, maintains structured task memory and performs goal decomposition, outcome verification, and error-driven correction. A low-level executor, instantiated as a VLA-based visuomotor controller, carries out each sub-task through diffusion-based action generation conditioned on geometry-preserving filtered observations. Together, the two systems form a closed loop between planning and execution, enabling memory-aware reasoning, adaptive replanning, and robust online recovery. Experiments on representative RMBench tasks show that the proposed framework substantially outperforms representative baselines, achieving a 32.4% average success rate compared with 9.8% for the strongest baseline. Ablation studies further confirm the importance of structured memory and closed-loop recovery for long-horizon manipulation.
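The closed planning–execution loop described above can be sketched roughly as follows. This is a minimal illustrative skeleton, not the paper's implementation: all class and method names (`TaskMemory`, `decompose`, `verify`, `correct`, `execute`) are assumptions introduced here to show how the high-level planner, structured memory, and low-level executor might interact.

```python
from dataclasses import dataclass, field


@dataclass
class TaskMemory:
    """Structured task memory (hypothetical): completed sub-goals and failure notes."""
    completed: list = field(default_factory=list)
    failures: list = field(default_factory=list)


def run_episode(planner, executor, env, max_steps=20):
    """Sketch of the dual-system loop: plan -> execute -> verify -> correct.

    `planner` stands in for the VLM-based agentic module, `executor` for the
    VLA-based visuomotor controller; both interfaces are illustrative.
    """
    memory = TaskMemory()
    obs = env.reset()
    for _ in range(max_steps):
        subgoal = planner.decompose(obs, memory)   # goal decomposition over memory
        if subgoal is None:                        # planner judges the task complete
            return True
        obs = executor.execute(subgoal, obs)       # low-level visuomotor execution
        if planner.verify(subgoal, obs):           # outcome verification
            memory.completed.append(subgoal)
        else:                                      # error-driven correction / replanning
            memory.failures.append(subgoal)
            planner.correct(subgoal, memory)
    return False                                   # horizon exhausted
```

The point of the sketch is the control flow: the planner never emits motor commands and the executor never reasons about the task graph, so recovery happens by updating the structured memory and replanning rather than by retrying a fixed action sequence.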