PRTS: A Primitive Reasoning and Tasking System via Contrastive Representations

arXiv cs.AI / 5/1/2026

Key Points

  • PRTS (Primitive Reasoning and Tasking System) is a new vision-language-action (VLA) foundation model that reframes robot pretraining from supervised behavior cloning to goal-conditioned reinforcement learning.
  • It treats language instructions as goals and uses contrastive reinforcement learning to learn a shared embedding space in which state-action and goal embeddings quantitatively reflect how reachable and feasible a goal is from the current point in a trajectory (see the sketch after this list).
  • The model derives dense goal-reachability supervision from offline trajectories without requiring reward annotations, and integrates it into the VLM backbone with a role-aware causal mask that adds negligible overhead.
  • Pretrained on 167B tokens, PRTS achieves state-of-the-art results across multiple LIBERO variants, SimplerEnv, and 14 real-world tasks, with especially large improvements for long-horizon, contact-rich, and zero-shot novel-instruction scenarios.
  • Overall, the approach improves both execution success and long-horizon planning for general-purpose robotic policies by bridging semantic goal reasoning with temporal task progress.
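
The sketch below illustrates the contrastive reinforcement learning idea referenced above; it follows the standard contrastive RL recipe rather than the paper's exact implementation. A state-action encoder φ and a goal encoder ψ are trained with an InfoNCE-style loss so that the score φ(s, a)·ψ(g) ranks goals drawn from the discounted future of the same trajectory above goals from other trajectories, which is what lets the score act as a log-reachability estimate without reward labels. The encoder architectures, dimensions, and loss details here are illustrative assumptions, not PRTS's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveGoalCritic(nn.Module):
    """Embeds (state, action) pairs and language goals into a shared space.

    The inner product phi(s, a) . psi(g) is trained so that, up to a constant,
    it tracks how likely goal g is to be reached from (s, a). Plain MLPs stand
    in for the VLM backbone described in the paper.
    """

    def __init__(self, state_dim, action_dim, goal_dim, embed_dim=256):
        super().__init__()
        self.sa_encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, 512), nn.ReLU(), nn.Linear(512, embed_dim)
        )
        self.goal_encoder = nn.Sequential(
            nn.Linear(goal_dim, 512), nn.ReLU(), nn.Linear(512, embed_dim)
        )

    def forward(self, state, action, goal):
        phi = self.sa_encoder(torch.cat([state, action], dim=-1))  # (B, D)
        psi = self.goal_encoder(goal)                              # (B, D)
        return phi, psi


def contrastive_rl_loss(phi, psi):
    """InfoNCE-style objective over a batch of trajectories.

    Each (state, action) row is paired with a goal sampled from the discounted
    future of its own trajectory (the positive); goals from the other rows of
    the batch act as negatives, so no reward annotations are needed.
    """
    logits = phi @ psi.T                                     # (B, B) pairwise scores
    labels = torch.arange(phi.size(0), device=phi.device)    # diagonal = positives
    return F.cross_entropy(logits, labels)
```

In this formulation the data pipeline, not the loss, carries the temporal signal: sampling the positive goal at a geometrically discounted future offset within the same trajectory is what makes the learned score reflect reachability over time rather than mere semantic similarity.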

Abstract

Vision-Language-Action (VLA) models advance robotic control via strong visual-linguistic priors. However, existing VLAs predominantly frame pretraining as supervised behavior cloning, overlooking the fundamental nature of robot learning as a goal-reaching process that requires understanding temporal task progress. We present PRTS (Primitive Reasoning and Tasking System), a VLA foundation model that reformulates pretraining through goal-conditioned reinforcement learning. By treating language instructions as goals and employing contrastive reinforcement learning, PRTS learns a unified embedding space in which the inner product of state-action and goal embeddings approximates the log-discounted goal occupancy, i.e., the probability of reaching the language-specified goal from the current state-action pair, thereby quantitatively assessing physical feasibility beyond static semantic matching. PRTS draws this dense goal-reachability supervision directly from offline trajectories without reward annotations and folds it into the VLM backbone via a role-aware causal mask, incurring negligible overhead over vanilla behavior cloning. This paradigm endows the high-level reasoning system with intrinsic goal-reachability awareness, bridging semantic reasoning and temporal task progress, and further benefits goal-conditioned action prediction. Pretrained on 167B tokens of diverse manipulation and embodied-reasoning data, PRTS reaches state-of-the-art performance on LIBERO, LIBERO-Pro, LIBERO-Plus, SimplerEnv, and a real-world suite of 14 complex tasks, with particularly substantial gains in long-horizon, contact-rich, and zero-shot novel-instruction settings. These results confirm that injecting goal-reachability awareness significantly improves both the execution success and the long-horizon planning of general-purpose robotic foundation policies.
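
The abstract's role-aware causal mask is not described in detail here, so the snippet below is a minimal sketch, assuming the common pattern of a standard causal attention mask that is additionally gated by per-token roles (instruction/goal, observation, action). The role ids, the specific blocking rule, and the function name role_aware_causal_mask are illustrative assumptions, not the paper's definition.

```python
import torch

def role_aware_causal_mask(roles: torch.Tensor) -> torch.Tensor:
    """Builds a causal attention mask gated by token roles.

    `roles` is a 1-D integer tensor assigning each token a role id, e.g.
    0 = instruction/goal, 1 = observation/state, 2 = action. The rule used
    here (a hypothetical choice): every token attends causally to earlier
    tokens, except that goal tokens never attend to state/action tokens,
    so the goal representation stays independent of the rollout it is
    scored against. Returns a boolean (T, T) mask where True = may attend.
    """
    T = roles.size(0)
    causal = torch.tril(torch.ones(T, T)).bool()       # standard lower-triangular causal mask
    goal_rows = (roles == 0).unsqueeze(1)               # (T, 1) rows that are goal tokens
    non_goal_cols = (roles != 0).unsqueeze(0)           # (1, T) columns that are not goal tokens
    blocked = goal_rows & non_goal_cols                 # goal tokens attending to state/action tokens
    return causal & ~blocked


# Example token layout: [goal, goal, state, action, state, action]
roles = torch.tensor([0, 0, 1, 2, 1, 2])
mask = role_aware_causal_mask(roles)
```

Because the mask only changes which attention entries are allowed, a scheme like this reuses the backbone's existing attention computation, which is consistent with the abstract's claim that the extra supervision adds negligible overhead over vanilla behavior cloning.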