TAG: Target-Agnostic Guidance for Stable Object-Centric Inference in Vision-Language-Action Models

arXiv cs.RO / 3/26/2026


Key Points

  • The paper identifies a key reliability issue in Vision-Language-Action (VLA) robot policies: in cluttered scenes, many failures stem from instance-level grounding errors rather than truly infeasible motions.
  • It proposes TAG (Target-Agnostic Guidance), an inference-time guidance method that uses object-erased observations to counter distractor- and appearance-induced bias.
  • Drawing inspiration from classifier-free guidance (CFG), TAG computes a residual steering signal from the difference between policy outputs on original vs. object-erased inputs to strengthen reliance on correct object evidence.
  • TAG requires no policy architecture changes and can be integrated with existing VLA models with minimal additional training/inference modifications.
  • Experiments on LIBERO, LIBERO-Plus, and VLABench show TAG improves robustness in clutter and reduces near-miss grasps and wrong-object executions.
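The residual steering described above follows the classifier-free guidance pattern: treat the object-erased branch as the "unconditional" prediction and push the action along the difference toward the fully-conditioned one. A minimal sketch of that update (the `gamma` scale, function names, and toy linear policy here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def tag_guided_action(policy, obs, obs_erased, gamma=2.0):
    """CFG-style residual steering (sketch of the TAG idea).

    policy     -- callable mapping an observation to an action vector
    obs        -- original observation
    obs_erased -- observation with the target object erased
    gamma      -- guidance scale (assumed hyperparameter; gamma=1
                  recovers the unguided policy output on obs)
    """
    a_orig = policy(obs)           # conditioned on full object evidence
    a_erased = policy(obs_erased)  # target-agnostic branch
    # Amplify the component of the action attributable to the object.
    return a_erased + gamma * (a_orig - a_erased)

# Toy linear "policy" for illustration only.
dummy_policy = lambda x: 0.5 * x
obs = np.array([1.0, 2.0])
obs_erased = np.array([0.0, 2.0])  # first feature "erased"
action = tag_guided_action(dummy_policy, obs, obs_erased, gamma=2.0)
```

With `gamma > 1` the update extrapolates past the original prediction in the direction implied by the object evidence, which is why no architecture change is needed: the policy is simply queried twice at inference time.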

Abstract

Vision-Language-Action (VLA) policies have shown strong progress in mapping language instructions and visual observations to robotic actions, yet their reliability degrades in cluttered scenes with distractors. By analyzing failure cases, we find that many errors do not arise from infeasible motions, but from instance-level grounding failures: the policy often produces a plausible grasp trajectory that lands slightly off-target or even on the wrong object instance. To address this issue, we propose TAG (Target-Agnostic Guidance), a simple inference-time guidance mechanism that explicitly reduces distractor- and appearance-induced bias in VLA policies. Inspired by classifier-free guidance (CFG), TAG contrasts policy predictions under the original observation and an object-erased observation, and uses their difference as a residual steering signal that strengthens the influence of object evidence in the decision process. TAG does not require modifying the policy architecture and can be integrated with existing VLA policies with minimal training and inference changes. We evaluate TAG on standard manipulation benchmarks, including LIBERO, LIBERO-Plus, and VLABench, where it consistently improves robustness under clutter and reduces near-miss and wrong-object executions.