HiVLA: A Visual-Grounded-Centric Hierarchical Embodied Manipulation System
arXiv cs.CV · April 16, 2026
Key Points
- The paper introduces HiVLA, a hierarchical, visual-grounded-centric embodied manipulation system that separates high-level semantic planning from low-level motor control, avoiding the degradation of a base vision-language model's (VLM's) reasoning that end-to-end fine-tuning can cause.
- In the high-level stage, a VLM planner performs task decomposition and visual grounding, outputting structured plans that include subtask instructions and target bounding boxes.
- For low-level execution, HiVLA uses a flow-matching Diffusion Transformer (DiT) action expert with a cascaded cross-attention mechanism to integrate global context, object-centric crops, and skill semantics for robust action generation.
- Experiments in both simulation and the real world report that HiVLA significantly outperforms end-to-end VLA baselines, with particular strength in long-horizon skill composition and small-object manipulation in cluttered environments.
- The proposed decoupled architecture is designed to preserve the base VLM’s zero-shot reasoning while allowing independent improvements to the planning and action components over time.
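To make the planner's output concrete, here is a hypothetical sketch of the kind of structured plan described in the second key point: a list of subtask instructions paired with target bounding boxes. The field names (`task`, `subtasks`, `instruction`, `bbox`) and all values are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical structured plan emitted by a HiVLA-style high-level VLM
# planner. Field names and values are illustrative assumptions only.
plan = {
    "task": "put the red block in the drawer",
    "subtasks": [
        {"instruction": "open the drawer",
         "bbox": [412, 290, 540, 388]},  # target box as [x1, y1, x2, y2] pixels
        {"instruction": "pick up the red block",
         "bbox": [118, 205, 176, 262]},
        {"instruction": "place the block in the drawer",
         "bbox": [412, 290, 540, 388]},
    ],
}

# The low-level action expert would consume one subtask at a time,
# using the bounding box to crop an object-centric view of the scene.
for step in plan["subtasks"]:
    print(step["instruction"], step["bbox"])
```

A decoupled plan like this is what lets the two stages improve independently: the planner can be swapped for a stronger VLM without retraining the action expert, as long as the plan format is preserved.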
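The cascaded cross-attention mentioned in the third key point can be sketched as a sequence of attention stages, each conditioning the action tokens on one source of context. This is a minimal single-head NumPy sketch under stated assumptions: the staging order (global context, then object-centric crops, then skill semantics) follows the bullet above, but the function names, residual connections, and omission of learned projection weights are simplifications, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attn(q, kv):
    """Single-head scaled dot-product cross-attention.
    Learned Q/K/V projections are omitted for brevity."""
    d = q.shape[-1]
    scores = q @ kv.T / np.sqrt(d)   # (n_q, n_kv) attention logits
    return softmax(scores) @ kv      # weighted sum of context tokens

def cascaded_cross_attention(action_tokens, global_ctx, crop_feats, skill_emb):
    # Stage 1: attend to the full-scene (global) visual context.
    x = action_tokens + cross_attn(action_tokens, global_ctx)
    # Stage 2: refine with object-centric crop features.
    x = x + cross_attn(x, crop_feats)
    # Stage 3: condition on skill-level semantic embeddings.
    x = x + cross_attn(x, skill_emb)
    return x
```

Each stage reads from a different context source while the action tokens accumulate information through residual additions; in the actual DiT action expert these stages would sit inside transformer blocks alongside the flow-matching denoising pathway.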