AI Navigate

Overcoming Visual Clutter in Vision Language Action Models via Concept-Gated Visual Distillation

arXiv cs.CV / 3/12/2026


Key Points

  • CGVD is a training-free, model-agnostic inference framework to stabilize Vision-Language-Action policies in cluttered environments.
  • It parses each instruction into safe and distractor concept sets, then applies a two-layer target refinement (cross-validation followed by spatial disambiguation) to penalize false positives and isolate the genuine manipulation target; a sketch of this gating logic follows this list.
  • It uses Fourier-based inpainting to generate a clean observation that suppresses semantic distractors while preserving spatial geometry and visual proprioception.
  • In dense-clutter manipulation tasks, CGVD prevents performance collapse and significantly improves success rates (77.5% vs. the baseline's 43.0%).
  • The study argues that inference-time visual distillation is a critical prerequisite for robust robotic manipulation in clutter.
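The two-layer refinement can be read as a gating filter over detector outputs. The sketch below is a minimal illustration under assumptions not stated in the abstract: detections are assumed to arrive as labeled boxes from some open-vocabulary detector, the `margin` and `iou_thresh` thresholds are hypothetical, and the rejection rule (drop a target candidate when a more confident distractor detection overlaps it) is one plausible reading of the paper's "cross-validation" layer, not its confirmed implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # concept name from the instruction parse
    score: float  # detector confidence in [0, 1]
    box: tuple    # (x1, y1, x2, y2) in pixels

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def refine_target(candidates, distractors, margin=0.1, iou_thresh=0.5):
    """Hypothetical two-layer refinement.
    Layer 1 (cross-validation): reject a target candidate if a
    distractor concept explains the same region more confidently.
    Layer 2 (spatial disambiguation): among the survivors, keep the
    single most confident box as the manipulation target."""
    survivors = []
    for cand in candidates:
        rejected = any(
            iou(cand.box, dis.box) > iou_thresh
            and dis.score > cand.score + margin
            for dis in distractors
        )
        if not rejected:
            survivors.append(cand)
    return max(survivors, key=lambda d: d.score) if survivors else None
```

As a usage example, feeding this function the boxes scored against the "safe" concept as `candidates` and those scored against the "distractor" concepts as `distractors` yields either one refined target box or `None`, which a downstream policy could treat as "do not act".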

Abstract

Vision-Language-Action (VLA) models demonstrate impressive zero-shot generalization but frequently suffer from a "Precision-Reasoning Gap" in cluttered environments. This failure is driven by background-induced feature dilution, where high-frequency semantic noise corrupts the geometric grounding required for precise manipulation. To bridge this gap, we propose Concept-Gated Visual Distillation (CGVD), a training-free, model-agnostic inference framework that stabilizes VLA policies. CGVD operates by parsing instructions into safe and distractor sets, utilizing a two-layer target refinement process (combining cross-validation and spatial disambiguation) to explicitly penalize false positives and isolate genuine manipulation targets. We then process the scene via Fourier-based inpainting, generating a clean observation that actively suppresses semantic distractors while preserving critical spatial geometry and visual proprioception. Extensive evaluations in highly cluttered manipulation tasks demonstrate that CGVD prevents performance collapse. In environments with dense semantic distractors, our method significantly outperforms state-of-the-art baselines, achieving a 77.5% success rate compared to the baseline's 43.0%. By enforcing strict attribute adherence, CGVD establishes inference-time visual distillation as a critical prerequisite for robust robotic manipulation in clutter.
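The abstract frames distractors as high-frequency semantic noise, which suggests why a Fourier-domain inpainting step can suppress them while leaving coarse geometry intact. The sketch below is one illustrative way such a filter could work, not the paper's actual implementation: pixels inside a distractor mask are replaced by a low-pass reconstruction of the scene, so the coarse spatial layout survives while high-frequency texture is removed. The `cutoff` fraction and the single-channel grayscale setup are assumptions made for brevity.

```python
import numpy as np

def fourier_inpaint(image, distractor_mask, cutoff=0.05):
    """Replace masked distractor pixels with a low-pass reconstruction.

    image:           float array, shape (H, W), grayscale in [0, 1]
    distractor_mask: bool array, shape (H, W), True where distractors sit
    cutoff:          fraction of min(H, W) kept as the low-frequency radius
                     (an illustrative assumption, not a published value)
    """
    h, w = image.shape
    # Centered 2-D spectrum of the full observation.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Keep only a small disc of low frequencies around the DC component:
    # coarse spatial geometry survives, high-frequency texture is removed.
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff * min(h, w)
    lowpass = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= radius ** 2
    smooth = np.fft.ifft2(np.fft.ifftshift(spectrum * lowpass)).real
    # Composite: distractor pixels take the smooth reconstruction,
    # while target, robot arm, and surrounding geometry stay untouched.
    return np.where(distractor_mask, smooth, image)
```

Note the design choice this sketch encodes: filtering happens globally in the frequency domain but is applied locally through the mask, so visual proprioception cues outside the distractor regions pass through unchanged.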