Latent Bridge: Feature Delta Prediction for Efficient Dual-System Vision-Language-Action Model Inference

arXiv cs.RO / 5/5/2026


Key Points

  • Latent Bridge addresses an inference bottleneck in dual-system Vision-Language-Action (VLA) robotics models by reducing redundant computation in the Vision-Language Model (VLM) backbone across control steps.
  • It predicts timestep-to-timestep VLM output deltas using a lightweight model, allowing the action head to use predicted features while the expensive VLM backbone is invoked only periodically.
  • The method is instantiated on two architecturally distinct VLA variants — GR00T-N1.6 (as a feature-space bridge) and π0.5 (as a KV-cache bridge) — showing that the approach generalizes across architectures.
  • Using a task-agnostic DAgger training pipeline, Latent Bridge maintains 95–100% performance retention across multiple benchmarks while cutting VLM calls by 50–75% and improving net per-episode speed by about 1.65–1.73×.
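The core inference pattern from the key points — a cheap delta predictor standing in for the VLM backbone between periodic full refreshes — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`vlm_backbone`, `delta_predictor`, `action_head`), the toy computations inside them, and the refresh period `K` are all assumptions made for the example.

```python
import numpy as np

FEATURE_DIM = 8
K = 4  # invoke the expensive VLM backbone only every K control steps

def vlm_backbone(obs):
    """Stand-in for the expensive VLM forward pass (hypothetical)."""
    return np.tanh(obs)

def delta_predictor(features, obs):
    """Stand-in for the lightweight bridge that predicts the
    timestep-to-timestep change in the VLM's output features."""
    return 0.1 * (np.tanh(obs) - features)

def action_head(features):
    """Stand-in for the action expert consuming (predicted) features."""
    return float(features.mean())

def run_episode(observations):
    actions, features = [], None
    for t, obs in enumerate(observations):
        if t % K == 0:
            features = vlm_backbone(obs)  # periodic full refresh
        else:
            # cheap update: add the predicted feature delta
            features = features + delta_predictor(features, obs)
        actions.append(action_head(features))
    return actions

obs_stream = [np.full(FEATURE_DIM, 0.1 * t) for t in range(8)]
acts = run_episode(obs_stream)
print(len(acts))  # one action per control step
```

With `K = 4`, the backbone runs on 2 of the 8 steps, i.e. a 75% reduction in VLM calls, matching the upper end of the reduction range reported above.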

Abstract

Dual-system Vision-Language-Action (VLA) models achieve state-of-the-art robotic manipulation but are bottlenecked by the VLM backbone, which must execute at every control step while producing temporally redundant features. We propose Latent Bridge, a lightweight model that predicts VLM output deltas between timesteps, enabling the action head to operate on predicted outputs while the expensive VLM backbone is called only periodically. We instantiate Latent Bridge on two architecturally distinct VLAs: GR00T-N1.6 (feature-space bridge) and π0.5 (KV-cache bridge), demonstrating that the approach generalizes across VLA designs. Our task-agnostic DAgger training pipeline transfers across benchmarks without modification. Across four LIBERO suites, 24 RoboCasa kitchen tasks, and the ALOHA sim transfer-cube task, Latent Bridge achieves 95–100% performance retention while reducing VLM calls by 50–75%, yielding 1.65–1.73× net per-episode speedup.
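The abstract's second instantiation bridges the KV cache rather than the output features: the lightweight predictor updates the cached key/value tensors that the action expert attends to, instead of the VLM's final hidden states. A toy sketch of that idea follows; the cache layout, shapes, and the `kv_delta_predictor` function are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

LAYERS, TOKENS, DIM = 2, 4, 8

def kv_delta_predictor(kv_cache, obs):
    """Stand-in for a learned model predicting per-tensor KV deltas;
    here just a small observation-driven shift (hypothetical)."""
    drift = 0.05 * obs.mean()
    return {name: drift * np.ones_like(v) for name, v in kv_cache.items()}

def bridge_kv(kv_cache, obs):
    """Apply predicted deltas to every cached key/value tensor,
    keeping the cache layout the action head attends to unchanged."""
    deltas = kv_delta_predictor(kv_cache, obs)
    return {name: v + deltas[name] for name, v in kv_cache.items()}

# One key and one value tensor per layer, as a transformer cache would hold.
kv_cache = {
    f"layer{l}.{kind}": np.zeros((TOKENS, DIM))
    for l in range(LAYERS) for kind in ("key", "value")
}
obs = np.full(DIM, 0.2)
new_cache = bridge_kv(kv_cache, obs)
print(sorted(new_cache))  # same cache layout, refreshed contents
```

The design point this illustrates is that the bridge's output contract can match whatever interface the action head already consumes — final features in one VLA, a per-layer KV cache in another — which is why the same delta-prediction idea transfers across the two architectures.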