AI Navigate

Red-Teaming Vision-Language-Action Models via Quality Diversity Prompt Generation for Robust Robot Policies

arXiv cs.AI / 3/16/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces Q-DIG, a Quality Diversity-based red-teaming method that identifies diverse, task-relevant natural-language instructions that cause Vision-Language-Action (VLA) robots to fail, with the goal of improving their robustness.
  • Q-DIG combines Quality Diversity techniques with Vision-Language Models to generate a broad spectrum of adversarial prompts that reveal vulnerabilities in VLA behavior.
  • Experiments across simulation benchmarks show Q-DIG discovers more diverse and meaningful failure modes than baseline approaches, and fine-tuning VLA on generated prompts improves task success on unseen instructions.
  • User studies indicate the prompts are more natural and human-like than baselines, and real-world evaluations align with simulation results.

Abstract

Vision-Language-Action (VLA) models have significant potential to enable general-purpose robotic systems for a range of vision-language tasks. However, the performance of VLA-based robots is highly sensitive to the precise wording of language instructions, and it remains difficult to predict when such robots will fail. To improve the robustness of VLAs to different wordings, we present Q-DIG (Quality Diversity for Diverse Instruction Generation), which performs red-teaming by scalably identifying diverse natural-language task descriptions that induce failures while remaining task-relevant. Q-DIG integrates Quality Diversity (QD) techniques with Vision-Language Models (VLMs) to generate a broad spectrum of adversarial instructions that expose meaningful vulnerabilities in VLA behavior. Our results across multiple simulation benchmarks show that Q-DIG finds more diverse and meaningful failure modes than baseline methods, and that fine-tuning VLAs on the generated instructions improves task success rates. Furthermore, results from a user study highlight that Q-DIG generates prompts judged to be more natural and human-like than those from baselines. Finally, real-world evaluations of Q-DIG prompts show results consistent with simulation, and fine-tuning VLAs on the generated prompts further improves success rates on unseen instructions. Together, these findings suggest that Q-DIG is a promising approach for identifying vulnerabilities and improving the robustness of VLA-based robots. Our anonymous project website is at qdigvla.github.io.
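To make the Quality Diversity idea concrete, here is a minimal MAP-Elites-style loop for adversarial instruction search. Everything below is an illustrative stand-in, not Q-DIG's actual components: the `mutate` function mimics a VLM-driven rewrite with a toy synonym swap, the `descriptor` buckets instructions by surface features, and `score` is a placeholder for a failure-inducing signal (e.g., one minus the VLA policy's success rate under a task-relevance constraint).

```python
# Hypothetical sketch of a Quality Diversity (MAP-Elites-style) loop for
# adversarial instruction generation. All components here are toy stand-ins
# for the VLM mutation operator and VLA failure scoring described in the paper.
import random

random.seed(0)

def mutate(instruction):
    # Stand-in for a VLM-driven rewrite: randomly swap one word for a synonym.
    synonyms = {"pick": "grab", "up": "off the surface", "red": "crimson",
                "block": "cube", "place": "put"}
    words = instruction.split()
    i = random.randrange(len(words))
    words[i] = synonyms.get(words[i], words[i])
    return " ".join(words)

def descriptor(instruction):
    # Behavior descriptor: bucket by word count and presence of long words.
    words = instruction.split()
    length_bin = min(len(words) // 3, 4)
    rare_bin = int(any(len(w) > 6 for w in words))
    return (length_bin, rare_bin)

def score(instruction):
    # Placeholder for (1 - VLA success rate) on this instruction; here a
    # toy proxy that rewards denser phrasings.
    return sum(len(w) for w in instruction.split()) / max(len(instruction), 1)

archive = {}  # behavior cell -> (score, instruction)
seed = "pick up the red block"
archive[descriptor(seed)] = (score(seed), seed)

for _ in range(200):
    # Sample an elite, mutate it, and keep the child if it improves its cell.
    _, parent = random.choice(list(archive.values()))
    child = mutate(parent)
    cell = descriptor(child)
    if cell not in archive or score(child) > archive[cell][0]:
        archive[cell] = (score(child), child)

# The archive now holds one highest-scoring instruction per behavior cell:
# a *diverse* set of candidate failure-inducing prompts, not a single worst case.
for cell, (s, inst) in sorted(archive.items()):
    print(cell, round(s, 2), inst)
```

In the real system, each candidate instruction would be rolled out on the VLA policy in simulation, and the diversity axes would come from semantically meaningful descriptors rather than word counts; the loop structure, however, is the standard QD pattern the abstract references.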