Libra-VLA: Achieving Learning Equilibrium via Asynchronous Coarse-to-Fine Dual-System

arXiv cs.CL / 4/29/2026


Key Points

  • The paper argues that many existing Vision-Language-Action (VLA) robotics models use a flat, monolithic generation approach that maps semantics directly to high-frequency motor commands, widening the semantic-to-actuation gap.
  • It introduces Libra-VLA, a coarse-to-fine dual-system architecture that decomposes robotic actions into discrete macro-direction tokens (semantic planning) and continuous micro-pose alignment (action refinement).
  • By explicitly balancing learning difficulty between the semantic planner and the action refiner, the authors find that performance follows an “inverted-U” curve with respect to decomposition granularity, peaking when the two sub-systems reach a training equilibrium.
  • The method also uses asynchronous execution, leveraging the modular structure to improve scalability, robustness, and responsiveness for open-world manipulation tasks.
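The coarse-to-fine split and the asynchronous execution described above can be sketched in a few lines. This is a minimal, hypothetical illustration (all names and the tiny direction vocabulary are assumptions, not the paper's implementation): a slow planner picks a discrete macro-direction token, while a fast refiner conditions on that token to emit continuous micro-pose deltas at every control tick.

```python
import numpy as np

# Illustrative macro-direction vocabulary (the paper's token set is learned,
# not hand-coded; this minimal positive-axis set is just for demonstration).
MACRO_DIRECTIONS = {
    "forward": np.array([1.0, 0.0, 0.0]),
    "left":    np.array([0.0, 1.0, 0.0]),
    "up":      np.array([0.0, 0.0, 1.0]),
}

def semantic_planner(goal, pose):
    """Coarse stage: pick the macro-direction token best aligned with the goal."""
    error = goal - pose
    return max(MACRO_DIRECTIONS, key=lambda t: MACRO_DIRECTIONS[t] @ error)

def action_refiner(token, goal, pose, step=0.05):
    """Fine stage: a continuous micro-pose delta conditioned on the coarse token."""
    direction = MACRO_DIRECTIONS[token]
    error = goal - pose
    # Project the residual error onto the planned direction, then bound the step.
    return direction * np.clip(direction @ error, -step, step)

def rollout(goal, pose, planner_period=10, steps=200):
    """Asynchronous loop: the planner re-plans every `planner_period` control
    ticks, while the refiner runs at every tick."""
    token = semantic_planner(goal, pose)
    for t in range(steps):
        if t % planner_period == 0:                      # slow, coarse re-planning
            token = semantic_planner(goal, pose)
        pose = pose + action_refiner(token, goal, pose)  # fast, fine refinement
    return pose

final = rollout(goal=np.array([0.4, 0.2, 0.1]), pose=np.zeros(3))
```

The key point the sketch captures is the decoupled rates: the discrete planner can run at low frequency (here every 10 ticks) without stalling the high-frequency continuous refiner.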

Abstract

Vision-Language-Action (VLA) models are a promising paradigm for generalist robotic manipulation, grounding high-level semantic instructions into executable physical actions. However, prevailing approaches typically adopt a monolithic generation paradigm, directly mapping visual-linguistic features to high-frequency motor commands in a flat, non-hierarchical fashion. This strategy overlooks the inherent hierarchy of robotic manipulation, in which complex actions can be naturally modeled in a Hybrid Action Space that decomposes into discrete macro-directional reaching and continuous micro-pose alignment. Ignoring this structure severely widens the semantic-actuation gap and imposes a heavy representational burden on grounding high-level semantics to continuous actions. To address this, we introduce Libra-VLA, a novel Coarse-to-Fine Dual-System VLA architecture. We explicitly decouple the learning complexity into a coarse-to-fine hierarchy to strike a training equilibrium, while simultaneously leveraging this structural modularity to implement an asynchronous execution strategy. The Semantic Planner predicts discrete action tokens capturing macro-directional intent, while the Action Refiner conditions on coarse intent to generate high-frequency continuous actions for precise alignment. Crucially, our empirical analysis reveals that performance follows an inverted-U curve relative to action decomposition granularity, peaking exactly when the learning difficulty is balanced between the two sub-systems. With the asynchronous design, our approach offers a scalable, robust, and responsive solution for open-world manipulation.
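The hybrid action space idea can be made concrete with a toy decomposition. This sketch is an assumed formulation, not the paper's exact scheme: a continuous 1-D action is split into a discrete macro bin plus a continuous residual. The bin count K sets the decomposition granularity, and it shows why an equilibrium exists: larger K makes the discrete planning task harder (more classes to predict), but bounds the residual left for the continuous refiner more tightly, and vice versa.

```python
def decompose(action, k, low=-1.0, high=1.0):
    """Split a continuous action into (discrete bin index, continuous residual)
    over K uniform bins spanning [low, high]."""
    width = (high - low) / k
    idx = min(int((action - low) / width), k - 1)  # discrete macro token
    center = low + (idx + 0.5) * width             # token's canonical value
    residual = action - center                     # continuous micro part
    return idx, residual

def reconstruct(idx, residual, k, low=-1.0, high=1.0):
    """Invert the decomposition: token center plus residual."""
    width = (high - low) / k
    return low + (idx + 0.5) * width + residual

# The residual is always bounded by half the bin width, so the refiner's
# burden shrinks as K grows, while the planner's classification task grows.
a = 0.37
idx, res = decompose(a, 8)
```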