GCImOpt: Learning efficient goal-conditioned policies by imitating optimal trajectories

arXiv cs.RO / 4/27/2026


Key Points

  • GCImOpt proposes learning efficient goal-conditioned control policies via imitation learning using high-quality datasets generated by trajectory optimization, avoiding costly or suboptimal demonstrations.
  • The dataset generation method is computationally efficient, enabling thousands of optimal trajectories in minutes on a laptop, and it includes an augmentation technique that uses intermediate states as additional goals to expand the dataset size by an order of magnitude (see the sketch after this list).
  • Using these generated datasets, the approach trains goal-conditioned neural network policies that can drive systems toward arbitrary goals across multiple control tasks.
  • Experiments on cart-pole, 2D/3D quadcopter stabilization, and 6-DoF robot-arm point reaching show high success rates and near-optimal control behavior with compact models (under 80k parameters) that run up to more than 6,000 times faster than a trajectory optimization solver.
  • The authors release videos, code, datasets, and pretrained policies under a free software license, supporting replication and onboard deployment for resource-constrained controllers.
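
To make the augmentation idea concrete, here is a minimal sketch of relabeling intermediate states along each optimized trajectory as additional goals. This is not the authors' released code; the function name, the `stride` parameter, and the relabeling density are illustrative assumptions.

```python
import numpy as np

def augment_with_intermediate_goals(states, controls, stride=5):
    """Turn one optimal trajectory into many (state, goal, control) examples.

    Each intermediate state along the trajectory is treated as an additional
    goal, and the trajectory prefix leading up to it is reused as a
    demonstration of reaching that goal. `stride` controls how densely the
    intermediate goals are sampled.
    """
    examples = []
    T = len(states)
    for goal_idx in range(stride, T, stride):
        goal = states[goal_idx]
        # Every earlier timestep becomes a training example toward this goal.
        for t in range(goal_idx):
            examples.append((states[t], goal, controls[t]))
    return examples

# Example: a 100-step trajectory yields far more training pairs than timesteps.
states = np.random.randn(100, 4)    # e.g. cart-pole states
controls = np.random.randn(100, 1)  # corresponding optimal controls
pairs = augment_with_intermediate_goals(states, controls)
print(len(pairs))  # ~950 pairs from 100 timesteps
```

In the paper this kind of relabeling expands the training set by roughly an order of magnitude; the stride used here is just one way such a density could be chosen.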

Abstract

Imitation learning is a well-established approach for machine-learning-based control. However, its applicability depends on having access to demonstrations, which are often expensive to collect and/or suboptimal for solving the task. In this work, we present GCImOpt, an approach to learn efficient goal-conditioned policies by training on datasets generated by trajectory optimization. Our approach for dataset generation is computationally efficient, can generate thousands of optimal trajectories in minutes on a laptop computer, and produces high-quality demonstrations. Further, by means of a data augmentation scheme that treats intermediate states as goals, we are able to increase the training dataset size by an order of magnitude. Using our generated datasets, we train goal-conditioned neural network policies that can control the system towards arbitrary goals. To demonstrate the generality of our approach, we generate datasets and then train policies for various control tasks, namely cart-pole stabilization, planar and three-dimensional quadcopter stabilization, and point reaching using a 6-DoF robot arm. We show that our trained policies can achieve high success rates and near-optimal control profiles, all while being small (less than 80,000 neural network parameters) and fast enough (up to more than 6,000 times faster than a trajectory optimization solver) that they could be deployed onboard resource-constrained controllers. We provide videos, code, datasets and pre-trained policies under a free software license; see our project website https://jongoiko.github.io/gcimopt/.
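
To give a sense of scale for the "under 80,000 parameters" claim, the following is a minimal PyTorch sketch of what a goal-conditioned policy of that size and its behavior-cloning training loop might look like. The `GoalConditionedPolicy` class, layer sizes, and synthetic stand-in data are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class GoalConditionedPolicy(nn.Module):
    """Small MLP mapping (state, goal) -> control.
    These layer sizes are illustrative: roughly 18k parameters,
    well under the <80k budget reported in the paper."""
    def __init__(self, state_dim, goal_dim, control_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, control_dim),
        )

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))

# Stand-in dataset of (state, goal, optimal control) tuples; in practice these
# would come from the trajectory-optimization datasets described above.
states = torch.randn(1024, 4)
goals = torch.randn(1024, 4)
controls = torch.randn(1024, 1)
loader = DataLoader(TensorDataset(states, goals, controls),
                    batch_size=64, shuffle=True)

policy = GoalConditionedPolicy(state_dim=4, goal_dim=4, control_dim=1)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    for s, g, u in loader:
        loss = loss_fn(policy(s, g), u)  # regress onto the optimal controls
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

At inference time such a policy is just a few small matrix multiplies, which is why a network of this size can be evaluated orders of magnitude faster than running a trajectory optimization solver online.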