Clutter-Robust Vision-Language-Action Models through Object-Centric and Geometry Grounding

arXiv cs.RO / April 27, 2026

Key Points

  • Existing vision-language-action (VLA) models often entangle perception and control in a single pipeline, which weakens language-conditioned grounding and leads to real-world tabletop failures such as grasping when the target is absent and being distracted by clutter.
  • The paper introduces OBEYED-VLA, which disentangles perceptual grounding from action reasoning by applying object-centric, geometry-aware grounding to multi-view inputs before passing them to a pretrained VLA policy.
  • OBEYED-VLA uses a VLM-based stage to select task-relevant object regions across camera views, paired with a geometric grounding stage that prioritizes 3D structure over appearance (a minimal sketch of the grounding step follows this list).
  • The approach is fine-tuned on single-object demonstrations collected without clutter, and on a UR10e tabletop setup it substantially improves robustness across four challenging regimes: distractor objects, absent-target rejection, background appearance changes, and cluttered manipulation of unseen objects.
  • Ablation results show that both semantic (object-centric) grounding and geometry-aware grounding are essential for the observed performance gains and better generalization.
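
Concretely, the object-centric grounding step can be pictured as per-view masking conditioned on the language instruction. The sketch below is a minimal, hypothetical rendition: the paper does not specify the VLM interface or its output format, so the `Region` container, the `query_vlm` callable, and box-based masking are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the VLM-based object-centric grounding stage. The
# Region container, the query_vlm callable, and box-based masking are
# assumptions for illustration; the paper does not specify these details.
from dataclasses import dataclass
from typing import Callable, List

import numpy as np

@dataclass
class Region:
    x0: int
    y0: int
    x1: int
    y1: int

def mask_to_regions(rgb: np.ndarray, regions: List[Region]) -> np.ndarray:
    """Zero out everything except the task-relevant regions, so the
    downstream policy never sees distractors or background texture."""
    grounded = np.zeros_like(rgb)
    for r in regions:
        grounded[r.y0:r.y1, r.x0:r.x1] = rgb[r.y0:r.y1, r.x0:r.x1]
    return grounded

def ground_multiview(
    views: List[np.ndarray],
    instruction: str,
    query_vlm: Callable[[np.ndarray, str], List[Region]],
) -> List[np.ndarray]:
    """Ground each camera view independently. An empty region list models
    absent-target rejection: the policy then sees a blank view rather than
    a tempting distractor."""
    return [mask_to_regions(v, query_vlm(v, instruction)) for v in views]

if __name__ == "__main__":
    # Stub grounding call; a real system would plug in an open-vocabulary
    # grounding VLM here.
    stub = lambda rgb, instruction: [Region(40, 40, 120, 120)]
    views = [np.zeros((224, 224, 3), dtype=np.uint8) + 128 for _ in range(2)]
    grounded = ground_multiview(views, "pick up the red mug", stub)
    print(grounded[0].shape)  # (224, 224, 3); pixels outside the box are zero
```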

Abstract

Recent Vision-Language-Action (VLA) models have made impressive progress toward general-purpose robotic manipulation by post-training large Vision-Language Models (VLMs) for action prediction. Yet most VLAs entangle perception and control in a monolithic pipeline optimized purely for action, which can erode language-conditioned grounding. In our real-world tabletop tests, policies over-grasp when the target is absent, are distracted by clutter, and overfit to background appearance. To address these issues, we propose OBEYED-VLA (OBject-centric and gEometrY groundED VLA), a framework that explicitly disentangles perceptual grounding from action reasoning. Instead of operating directly on raw RGB, OBEYED-VLA augments VLAs with a perception module that grounds multi-view inputs into task-conditioned, object-centric, and geometry-aware observations. This module includes a VLM-based object-centric grounding stage that selects task-relevant object regions across camera views, along with a complementary geometric grounding stage that emphasizes the 3D structure of these objects over their appearance. The resulting grounded views are then fed to a pretrained VLA policy, which we fine-tune exclusively on single-object demonstrations collected without environmental clutter or non-target objects. On a real-world UR10e tabletop setup, OBEYED-VLA substantially improves robustness over strong VLA baselines across four challenging regimes and multiple difficulty levels: distractor objects, absent-target rejection, background appearance changes, and cluttered manipulation of unseen objects. Ablation studies confirm that both semantic grounding and geometry-aware grounding are critical to these gains. Overall, the results indicate that making perception an explicit, object-centric component is an effective way to strengthen and generalize VLA-based robotic manipulation.
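
The geometric grounding stage is described only as emphasizing 3D structure over appearance. One plausible reading, sketched below under stated assumptions, is to re-render the grounded object pixels as normalized depth plus approximate surface normals while discarding RGB texture entirely; the channel layout, the per-object depth normalization, and the finite-difference normals are all hypothetical choices, not the paper's method.

```python
# Hypothetical geometry-grounded encoding: normalized depth plus coarse
# surface normals for the task-relevant pixels, with RGB appearance dropped.
import numpy as np

def geometry_grounded_view(depth: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """depth: HxW metric depth map; mask: HxW bool mask of task-relevant pixels.
    Returns an HxWx3 float image that encodes only geometry."""
    out = np.zeros((*depth.shape, 3), dtype=np.float32)
    if not mask.any():
        return out  # absent target: hand the policy an empty, unambiguous view
    d = np.where(mask, depth, np.nan)
    # Normalize depth within the object region so the encoding does not
    # depend on where the object happens to sit in the workspace.
    dmin, dmax = np.nanmin(d), np.nanmax(d)
    out[..., 0] = np.nan_to_num((d - dmin) / max(dmax - dmin, 1e-6))
    # Approximate surface normals from finite-difference depth gradients.
    gy, gx = np.gradient(np.nan_to_num(d, nan=dmax))
    n = np.stack([-gx, -gy, np.ones_like(d)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    out[..., 1:] = (n[..., :2] * 0.5 + 0.5) * mask[..., None]
    return out

if __name__ == "__main__":
    depth = np.full((224, 224), 1.0, dtype=np.float32)
    mask = np.zeros((224, 224), dtype=bool)
    mask[60:160, 60:160] = True
    depth[mask] -= 0.1  # an object region raised 10 cm above the table
    view = geometry_grounded_view(depth, mask)
    print(view.shape)  # (224, 224, 3); non-masked pixels stay zero
```

If the policy consumes appearance-free views like these, texture never reaches the action model, which is consistent with the robustness to background appearance changes reported in the abstract.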