ATLAS: An Annotation Tool for Long-horizon Robotic Action Segmentation

arXiv cs.AI / 4/30/2026


Key Points

  • The paper introduces ATLAS, an annotation tool designed specifically for long-horizon robotic action segmentation with accurate temporal boundaries.
  • ATLAS enables time-synchronized visualization of multi-modal robot data, including multi-view video plus proprioceptive signals like gripper state and force/torque.
  • It supports widely used robotics dataset formats (e.g., ROS bags and RLDS) and provides direct support for datasets such as REASSEMBLE, with an extensible modular layer for new formats.
  • In experiments on a contact-rich assembly task, ATLAS cut average per-action annotation time by at least 6% versus ELAN, improved temporal alignment with expert annotations by more than 2.8%, and reduced boundary error roughly fivefold compared with vision-only tools.
  • The tool uses a keyboard-centric interface to minimize annotation effort and improve efficiency when preparing data for training and evaluating manipulation policy learning methods.
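The modular dataset abstraction layer mentioned above could plausibly look like a small adapter interface: format-specific readers (ROS bag, RLDS, ...) implement a common API, and the annotation UI only ever talks to that API. The sketch below is a hypothetical illustration; the class and method names (`DatasetAdapter`, `streams`, `read`, `InMemoryAdapter`) are assumptions, not ATLAS's actual code.

```python
# Hypothetical sketch of a modular dataset abstraction layer in the spirit
# of ATLAS; names and structure are assumptions, not the tool's real API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Iterator


@dataclass
class Sample:
    """One time-stamped reading from a single data stream."""
    timestamp: float  # seconds since episode start
    stream: str       # e.g. "cam_front", "gripper_state", "wrench"
    value: object     # frame, scalar, or vector payload


class DatasetAdapter(ABC):
    """Format-specific loaders implement this interface, so the
    annotation UI never touches raw file formats directly."""

    @abstractmethod
    def streams(self) -> list[str]:
        """Names of available streams (camera views, proprioceptive signals)."""

    @abstractmethod
    def read(self, stream: str) -> Iterator[Sample]:
        """Yield samples of one stream in timestamp order."""


class InMemoryAdapter(DatasetAdapter):
    """Toy adapter over a dict, standing in for a real ROS-bag/RLDS reader."""

    def __init__(self, data: dict[str, list[Sample]]):
        self._data = data

    def streams(self) -> list[str]:
        return sorted(self._data)

    def read(self, stream: str) -> Iterator[Sample]:
        # Re-sort defensively so consumers always see monotonic timestamps.
        yield from sorted(self._data[stream], key=lambda s: s.timestamp)
```

Supporting a new dataset format would then mean writing one adapter subclass, leaving the annotation interface unchanged.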

Abstract

Annotating long-horizon robotic demonstrations with precise temporal action boundaries is crucial for training and evaluating action segmentation and manipulation policy learning methods. Existing annotation tools, however, are often limited: they are designed primarily for vision-only data, do not natively support synchronized visualization of robot-specific time-series signals (e.g., gripper state or force/torque), or require substantial effort to adapt to different dataset formats. In this paper, we introduce ATLAS, an annotation tool tailored for long-horizon robotic action segmentation. ATLAS provides time-synchronized visualization of multi-modal robotic data, including multi-view video and proprioceptive signals, and supports annotation of action boundaries, action labels, and task outcomes. The tool natively handles widely used robotics dataset formats such as ROS bags and the Reinforcement Learning Dataset (RLDS) format, and provides direct support for specific datasets such as REASSEMBLE. ATLAS can be easily extended to new formats via a modular dataset abstraction layer. Its keyboard-centric interface minimizes annotation effort and improves efficiency. In experiments on a contact-rich assembly task, ATLAS reduced the average per-action annotation time by at least 6% compared to ELAN, while the inclusion of time-series data improved temporal alignment with expert annotations by more than 2.8% and decreased boundary error fivefold compared to vision-only annotation tools.
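One core mechanism behind the time-synchronized visualization the abstract describes is aligning streams sampled at different rates, e.g. keeping a force/torque plot cursor in step with the current video frame. A minimal sketch of one common approach, nearest-timestamp lookup with binary search, follows; this is an assumed illustration of the general technique, not code from the paper.

```python
# Minimal sketch: align a proprioceptive signal to a video-frame timestamp
# by nearest-neighbor lookup. The approach is assumed, not taken from ATLAS.
import bisect


def nearest_sample(ts: list[float], values: list[float], t: float) -> float:
    """Return the signal value whose timestamp is closest to time t.

    `ts` must be sorted ascending. An annotation UI could call this with
    the current video frame's timestamp to move the time-series cursor.
    """
    i = bisect.bisect_left(ts, t)
    if i == 0:
        return values[0]
    if i == len(ts):
        return values[-1]
    # Pick whichever neighbor is closer in time to t.
    return values[i] if ts[i] - t < t - ts[i - 1] else values[i - 1]
```

For example, with gripper-state samples at 0.0 s, 0.1 s, and 0.2 s, a frame at 0.14 s would be paired with the 0.1 s sample.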