ProGAL-VLA: Grounded Alignment through Prospective Reasoning in Vision-Language-Action Models

arXiv cs.RO / 4/14/2026


Key Points

  • ProGAL-VLA is a vision-language-action (VLA) approach designed to fix “language ignorance” in generalist robot agents by making actions sensitive to instruction changes.
  • The method builds a 3D entity-centric graph, uses a slow planner to generate symbolic sub-goals, and applies a Grounding Alignment Contrastive (GAC) loss to align sub-goals with grounded entities.
  • Actions are conditioned on a verified goal embedding, and attention entropy is used as an intrinsic ambiguity signal to support ambiguity-aware behavior without reducing performance on unambiguous tasks.
  • Reported results on LIBERO-Plus show major gains in robustness to robot perturbations (30.3→71.5%), a 3–4× reduction in language ignorance, and improved entity retrieval (0.41→0.71 Recall@1).
  • On a Custom Ambiguity Benchmark, ProGAL-VLA achieves AUROC 0.81 (vs. 0.52) and AUPR 0.79, significantly increasing clarification for ambiguous inputs (0.09→0.81) while maintaining unambiguous success.
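The ambiguity signal in the third point can be illustrated with a minimal sketch: compute the Shannon entropy of the goal embedding's attention distribution over candidate entities and request clarification when it is high. The function name, the example attention weights, and the threshold value below are illustrative assumptions, not details from the paper.

```python
import math

def attention_entropy(weights):
    """Shannon entropy (natural log) of an attention distribution.

    High entropy means attention is spread across many candidate
    entities, which serves as an intrinsic ambiguity signal.
    """
    return -sum(w * math.log(w) for w in weights if w > 0)

# Hypothetical attention over four candidate entities in the scene.
focused = [0.94, 0.02, 0.02, 0.02]   # unambiguous: one clear referent
diffuse = [0.25, 0.25, 0.25, 0.25]   # ambiguous: no clear referent

# Selective prediction: ask for clarification when entropy is high.
THRESHOLD = 1.0  # illustrative value, not taken from the paper
for name, w in [("focused", focused), ("diffuse", diffuse)]:
    h = attention_entropy(w)
    decision = "clarify" if h > THRESHOLD else "act"
    print(f"{name}: entropy={h:.2f} -> {decision}")
```

Thresholding a scalar entropy in this way is what allows the AUROC/AUPR evaluation reported above: sweeping the threshold traces out the trade-off between clarifying on ambiguous inputs and acting on unambiguous ones.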

Abstract

Vision-language-action (VLA) models enable generalist robotic agents but often exhibit language ignorance, relying on visual shortcuts and remaining insensitive to instruction changes. We present Prospective Grounding and Alignment VLA (ProGAL-VLA), which constructs a 3D entity-centric graph (GSM), uses a slow planner to produce symbolic sub-goals, and aligns them with grounded entities via a Grounding Alignment Contrastive (GAC) loss. All actions are conditioned on a verified goal embedding g_t, whose attention entropy provides an intrinsic ambiguity signal. On LIBERO-Plus, ProGAL-VLA increases robustness under robot perturbations from 30.3% to 71.5%, reduces language ignorance by 3–4×, and improves entity retrieval from 0.41 to 0.71 Recall@1. On the Custom Ambiguity Benchmark, it reaches AUROC 0.81 (vs. 0.52) and AUPR 0.79, and raises the clarification rate on ambiguous inputs from 0.09 to 0.81 without harming success on unambiguous tasks. The verification bottleneck increases the mutual information between language and actions, the GAC loss imposes an entity-level InfoNCE bound, and attention entropy yields calibrated selective prediction, indicating that explicit verified grounding is an effective path toward instruction-sensitive, ambiguity-aware agents.
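The abstract's claim that the GAC loss "imposes an entity-level InfoNCE bound" can be made concrete with a standard InfoNCE objective: each sub-goal embedding is scored against all candidate entities, and the loss is the negative log-probability of the correct (grounded) entity. The function name, similarity values, and temperature below are illustrative assumptions; the paper's actual loss may differ in detail.

```python
import math

def info_nce(sim_pos, sims_all, temperature=0.1):
    """InfoNCE loss for one (sub-goal, entity) pair.

    sim_pos: similarity between the sub-goal embedding and its true
    grounded entity; sims_all: similarities to all candidate entities
    (the positive plus negatives). Minimizing this pulls the sub-goal
    embedding toward its entity and away from the others.
    """
    logits = [s / temperature for s in sims_all]
    m = max(logits)  # log-sum-exp with max subtraction for stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - sim_pos / temperature

# Hypothetical cosine similarities of one sub-goal against four
# entities in the 3D entity-centric graph; index 0 is the referent.
sims = [0.9, 0.1, -0.2, 0.05]
print(f"GAC-style loss: {info_nce(sims[0], sims):.4f}")
```

Minimizing this loss lower-bounds the mutual information between sub-goals and grounded entities (the usual InfoNCE argument), which is consistent with the abstract's information-theoretic framing of the verification bottleneck.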