RoboAgent: Chaining Basic Capabilities for Embodied Task Planning

arXiv cs.RO / 4/10/2026


Key Points

  • The paper addresses embodied task planning, arguing that existing vision-language models struggle with multi-turn interaction, long-horizon reasoning, and extended context required in real-world-like environments.
  • It proposes RoboAgent, a capability-driven planning pipeline where a scheduler orchestrates multiple sub-capabilities, each maintaining its own context and producing intermediate reasoning or environment interactions.
  • The approach decomposes complex planning into a chain of simpler vision-language problems to improve performance while making reasoning more transparent and controllable.
  • RoboAgent uses a single VLM for the scheduler and all capabilities (no external tools) and is trained via a multi-stage process: behavior cloning, DAgger, and reinforcement learning with an expert policy.
  • Experiments on standard embodied task planning benchmarks reportedly validate the method, and the authors state that code will be released for reproducibility.
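The scheduler-plus-capabilities design in the points above can be illustrated with a minimal sketch. All names here (`Scheduler`, `Capability`, `vlm_call`) are hypothetical stand-ins, not the paper's actual API; the key idea shown is that each capability keeps its own isolated context while a scheduler routes queries to them.

```python
def vlm_call(prompt, context):
    """Stand-in for the single shared VLM; here a trivial stub."""
    return f"response to: {prompt}"

class Capability:
    """A sub-capability that maintains its own private context."""
    def __init__(self, name):
        self.name = name
        self.context = []  # per-capability history, isolated from others

    def run(self, query):
        self.context.append(query)
        result = vlm_call(query, self.context)
        self.context.append(result)
        return result

class Scheduler:
    """Orchestrates capabilities until the plan is complete."""
    def __init__(self, capabilities):
        self.capabilities = {c.name: c for c in capabilities}

    def plan(self, task, max_steps=5):
        trace = []
        for step in range(max_steps):
            # In the paper the same VLM decides which capability to
            # invoke next; this sketch simply round-robins for clarity.
            name = list(self.capabilities)[step % len(self.capabilities)]
            trace.append((name, self.capabilities[name].run(task)))
        return trace
```

In this toy form, the decomposition is visible in the returned trace: each step is a (capability, intermediate result) pair, which is what makes the reasoning process inspectable.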

Abstract

This paper focuses on embodied task planning, where an agent acquires visual observations from the environment and executes atomic actions to accomplish a given task. Although recent Vision-Language Models (VLMs) have achieved impressive results in multimodal understanding and reasoning, their performance remains limited when applied to embodied planning that involves multi-turn interaction, long-horizon reasoning, and extended context analysis. To bridge this gap, we propose RoboAgent, a capability-driven planning pipeline in which the model actively invokes different sub-capabilities. Each capability maintains its own context and produces intermediate reasoning results or interacts with the environment according to the query given by a scheduler. This framework decomposes complex planning into a sequence of basic vision-language problems that VLMs can better address, enabling a more transparent and controllable reasoning process. The scheduler and all capabilities are implemented with a single VLM, without relying on external tools. To train this VLM, we adopt a multi-stage paradigm that consists of: (1) behavior cloning with expert plans, (2) DAgger training using trajectories collected by the model, and (3) reinforcement learning guided by an expert policy. Across these stages, we exploit the internal information of the environment simulator to construct high-quality supervision for each capability, and we further introduce augmented and synthetic data to enhance the model's performance in more diverse scenarios. Extensive experiments on widely used embodied task planning benchmarks validate the effectiveness of the proposed approach. Our code will be available at https://github.com/woyut/RoboAgent_CVPR26.
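The abstract's three-stage training paradigm can be sketched schematically. The functions below (`expert_action`, `model_action`, the toy transition) are hypothetical placeholders, not the paper's implementation; the sketch only shows the data-collection logic that distinguishes stage 1 (states come from the expert) from stage 2 (states come from the model's own rollouts, but labels still come from the expert, as in DAgger).

```python
import random

def expert_action(state):
    """Toy expert policy standing in for the simulator's expert plans."""
    return state % 3

def model_action(state):
    """Toy stand-in for the partially trained model's policy."""
    return random.randrange(3)

def behavior_cloning(states):
    """Stage 1: supervise directly on expert-visited states and actions."""
    return [(s, expert_action(s)) for s in states]

def dagger_round(start_state, horizon=10, beta=0.5):
    """Stage 2 (DAgger): roll out a mixture of expert and model actions,
    but label every visited state with the expert's action."""
    dataset, s = [], start_state
    for _ in range(horizon):
        roll = expert_action(s) if random.random() < beta else model_action(s)
        dataset.append((s, expert_action(s)))  # expert provides the label
        s = (s + roll) % 100                   # toy environment transition
    return dataset

# Stage 3 (reinforcement learning guided by an expert policy) would
# further fine-tune on reward signals; it is omitted here for brevity.
```

The design choice DAgger addresses is distribution shift: behavior cloning only sees expert states, so errors compound once the model leaves that distribution, whereas DAgger collects expert labels on the states the model itself reaches.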