Characterizing Vision-Language-Action Models across XPUs: Constraints and Acceleration for On-Robot Deployment

arXiv cs.RO / April 28, 2026

Key Points

  • The paper analyzes how to deploy Vision-Language-Action (VLA) models on real robots, focusing on real-time inference under strict cost and energy budgets rather than on the desktop-grade GPUs most prior evaluations assume.
  • It introduces model–hardware co-characterization and a cross-accelerator leaderboard spanning GPUs, XPUs, and NPUs, scored with a CET metric (Cost, Energy, Time); appropriately “right-sized” edge devices turn out to be more cost- and energy-efficient than flagship GPUs while still meeting control-rate requirements (an illustrative scoring sketch follows this list).
  • Profiling reveals a consistent two-phase inference pattern: a compute-bound vision-language (VLM) backbone followed by a memory-bound Action Expert, which causes phase-dependent underutilization and hardware inefficiency.
  • The authors propose DP-Cache and V-AEFusion to cut diffusion redundancy and enable asynchronous pipeline parallelism, reporting speedups of up to 2.9x on GPUs and 6x on edge NPUs with only marginal degradation in task success.

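For a concrete feel for the CET evaluation, here is a minimal Python sketch. The paper names the three axes (Cost, Energy, Time) and a control-rate constraint, but this summary does not give the aggregation formula, so the geometric-mean scoring below, the `DeploymentProfile` fields, and every number are illustrative assumptions, not the leaderboard's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class DeploymentProfile:
    """Measured figures for one model-hardware pair (all hypothetical here)."""
    device_cost_usd: float      # accelerator purchase cost (C)
    energy_per_action_j: float  # joules drawn per inference (E)
    latency_ms: float           # end-to-end inference latency (T)

def meets_control_rate(p: DeploymentProfile, target_hz: float) -> bool:
    """A pair is deployable only if it sustains the robot's control rate."""
    return p.latency_ms <= 1000.0 / target_hz

def cet_score(p: DeploymentProfile, ref: DeploymentProfile) -> float:
    """Assumed aggregation: geometric mean of the three axes normalized
    against a reference device; lower is better. The paper's actual CET
    formula may weight or combine the axes differently."""
    ratios = [
        p.device_cost_usd / ref.device_cost_usd,
        p.energy_per_action_j / ref.energy_per_action_j,
        p.latency_ms / ref.latency_ms,
    ]
    return (ratios[0] * ratios[1] * ratios[2]) ** (1 / 3)

# Made-up numbers: a flagship GPU vs. a right-sized edge NPU at a
# 10 Hz control loop (100 ms latency budget).
flagship = DeploymentProfile(30_000.0, 15.0, 40.0)
edge_npu = DeploymentProfile(500.0, 2.0, 90.0)
for name, p in [("flagship GPU", flagship), ("edge NPU", edge_npu)]:
    print(f"{name}: CET={cet_score(p, flagship):.3f}, "
          f"meets 10 Hz: {meets_control_rate(p, 10.0)}")
```

Under any such monotone aggregation, the edge NPU can dominate on cost and energy while the control-rate check, rather than a raw latency ranking, decides deployability; this is the shape of the paper's “right-sized” finding.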
Abstract

Vision-Language-Action (VLA) models are promising for generalist robot control, but on-robot deployment is bottlenecked by real-time inference under tight cost and energy budgets. Most prior evaluations rely on desktop-grade GPUs, obscuring the trade-offs and opportunities offered by heterogeneous edge accelerators (GPUs/XPUs/NPUs). We present a systematic analysis for low-cost VLA deployment via model-hardware co-characterization. First, we build a cross-accelerator leaderboard and evaluate model-hardware pairs under CET (Cost, Energy, Time), showing that right-sized edge devices can be more cost-/energy-efficient than flagship GPUs while meeting control-rate constraints. Second, using in-depth profiling, we uncover a consistent two-phase inference pattern: a compute-bound VLM backbone followed by a memory-bound Action Expert, which induces phase-dependent underutilization and hardware inefficiency. Finally, guided by these insights, we propose DP-Cache and V-AEFusion to reduce diffusion redundancy and enable asynchronous pipeline parallelism, achieving up to 2.9x speedup on GPUs and 6x on edge NPUs with only marginal success degradation. The example leaderboard website is available at: https://vla-leaderboard-01.vercel.app/.
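The abstract says DP-Cache reduces diffusion redundancy but does not spell out the mechanism. One plausible reading is that consecutive denoising steps in the Action Expert recompute nearly identical work; the sketch below caches a step's output and reuses it whenever the input latent has barely drifted. The `denoiser` dynamics, the drift threshold, and the `StepCache` class are hypothetical stand-ins, not the paper's method.

```python
import numpy as np

def denoiser(latent: np.ndarray) -> np.ndarray:
    """Stand-in for one expensive denoising step (placeholder dynamics:
    the latent relaxes toward a fixed point, so late steps change little)."""
    return 0.5 * latent + 0.5

class StepCache:
    """Reuse the previous step's output when the input latent has barely
    moved: an illustrative take on cutting diffusion redundancy."""
    def __init__(self, rel_tol: float = 0.05):
        self.rel_tol = rel_tol
        self.prev_in = None
        self.prev_out = None
        self.hits = 0

    def __call__(self, latent: np.ndarray) -> np.ndarray:
        if self.prev_in is not None:
            drift = np.linalg.norm(latent - self.prev_in)
            if drift < self.rel_tol * np.linalg.norm(self.prev_in):
                self.hits += 1               # cache hit: skip the denoiser
                return self.prev_out
        out = denoiser(latent)               # cache miss: full evaluation
        self.prev_in, self.prev_out = latent.copy(), out
        return out

cache = StepCache()
latent = np.zeros(8)
for _ in range(10):
    latent = cache(latent)
print(f"skipped {cache.hits} of 10 denoising steps")
```

With these toy dynamics the run prints `skipped 5 of 10 denoising steps`: once the latent settles, most late steps become cache hits, which is the kind of redundancy a scheme like DP-Cache could exploit.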
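V-AEFusion is credited with enabling asynchronous pipeline parallelism across the two phases. As a rough illustration of why that helps, the sketch below overlaps a stand-in compute-bound VLM backbone with a stand-in memory-bound Action Expert through a bounded queue, so phase 1 encodes observation t+1 while phase 2 is still producing actions for observation t. The function names, the 50 ms / 30 ms timings, and the Python-threads realization are assumptions; a real deployment would overlap the phases on separate accelerator streams or compute engines.

```python
import queue
import threading
import time

def vlm_backbone(obs: str) -> str:
    """Stand-in for the compute-bound VLM backbone (phase 1)."""
    time.sleep(0.05)                      # pretend: 50 ms of dense prefill compute
    return f"features({obs})"

def action_expert(features: str) -> str:
    """Stand-in for the memory-bound diffusion Action Expert (phase 2)."""
    time.sleep(0.03)                      # pretend: 30 ms of bandwidth-bound denoising
    return f"action({features})"

feature_q = queue.Queue(maxsize=1)        # bounded handoff between the two phases

def producer(observations):
    # Phase 1 runs ahead: encode the next observation while phase 2
    # is still denoising actions for the current one.
    for obs in observations:
        feature_q.put(vlm_backbone(obs))
    feature_q.put(None)                   # sentinel: end of stream

start = time.perf_counter()
t = threading.Thread(target=producer, args=([f"obs{i}" for i in range(5)],))
t.start()
while (features := feature_q.get()) is not None:
    action_expert(features)               # phase 2 overlaps with phase 1
t.join()
print(f"pipelined: {time.perf_counter() - start:.2f}s vs. ~{5 * 0.08:.2f}s sequential")
```

In steady state the loop period is set by the slower phase (here 50 ms) rather than the 80 ms sum, and both the compute-heavy and bandwidth-heavy resources stay busy at once, which targets the phase-dependent underutilization the paper's profiling identified.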