SOMA: Strategic Orchestration and Memory-Augmented System for Vision-Language-Action Model Robustness via In-Context Adaptation

arXiv cs.RO / 3/26/2026


Key Points

  • The paper introduces SOMA, a memory- and attribution-driven orchestration framework designed to improve Vision-Language-Action (VLA) model robustness to perceptual noise and out-of-distribution (OOD) environments without parameter fine-tuning.
  • SOMA upgrades frozen VLA policies using an online pipeline that combines Dual-Memory Retrieval-Augmented Generation (RAG), an Attribution-Driven LLM orchestrator, and flexible MCP-based intervention mechanisms.
  • An offline Memory Consolidation module distills execution traces into reliable priors to support better long-term decision consistency.
  • Experiments on LIBERO-PRO and the new LIBERO-SOMA benchmarks across pi0, pi0.5, and SmolVLA show an average absolute success rate gain of 56.6%, including an 89.1% improvement for long-horizon task chaining.
  • The authors provide a project page and open-source code to enable reproducibility and further experimentation with the proposed system.
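To make the online pipeline above concrete, here is a minimal, hypothetical sketch of how such an orchestration loop could wrap a frozen policy. All names (`DualMemory`, `attribute_failure`, `intervene`) are illustrative assumptions, not the paper's actual API: the point is only the control flow of (1) contrastive dual-memory retrieval, (2) LLM-style failure attribution, and (3) an MCP-style intervention, with no policy weights ever updated.

```python
# Hypothetical sketch of SOMA's online pipeline (illustrative names only):
# a frozen VLA policy is augmented by memory retrieval, failure attribution,
# and dynamic intervention, without parameter fine-tuning.

from dataclasses import dataclass, field

@dataclass
class DualMemory:
    successes: list = field(default_factory=list)  # trusted priors
    failures: list = field(default_factory=list)   # contrastive negatives

    def retrieve(self, observation):
        # Toy similarity: exact-match lookup; a real system would embed and rank.
        pos = [m for m in self.successes if m["obs"] == observation]
        neg = [m for m in self.failures if m["obs"] == observation]
        return pos, neg

def attribute_failure(trace):
    # Stand-in for the LLM orchestrator: map a failed trace to a causal label.
    if trace.get("gripper_slip"):
        return "perception_noise"
    return "ood_environment"

def intervene(cause):
    # Stand-in for an MCP-style intervention: pick a recovery strategy.
    return {"perception_noise": "re-grasp", "ood_environment": "re-plan"}[cause]

memory = DualMemory()
memory.failures.append({"obs": "mug_on_shelf", "gripper_slip": True})

pos, neg = memory.retrieve("mug_on_shelf")
if neg:                                # a matching past failure exists
    cause = attribute_failure(neg[0])  # attribution step
    action = intervene(cause)          # intervention step
    print(action)                      # prints: re-grasp
```

In this toy loop the retrieved contrastive failure triggers an attribution, which selects a recovery action; the underlying policy itself is never modified, mirroring the fine-tuning-free design the paper describes.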

Abstract

Despite the promise of Vision-Language-Action (VLA) models as generalist robotic controllers, their robustness against perceptual noise and environmental variations in out-of-distribution (OOD) tasks remains fundamentally limited by the absence of long-term memory, causal failure attribution, and dynamic intervention capability. To address this, we propose SOMA, a Strategic Orchestration and Memory-Augmented System that upgrades frozen VLA policies for robust in-context adaptation without parameter fine-tuning. Specifically, SOMA operates through an online pipeline of contrastive Dual-Memory Retrieval-Augmented Generation (RAG), an Attribution-Driven Large-Language-Model (LLM) Orchestrator, and extensible Model Context Protocol (MCP) interventions, while an offline Memory Consolidation module continuously distills the execution traces into reliable priors. Experimental evaluations across three backbone models (pi0, pi0.5, and SmolVLA) on LIBERO-PRO and our proposed LIBERO-SOMA benchmarks demonstrate that SOMA achieves an average absolute success rate gain of 56.6%. This includes a significant absolute improvement of 89.1% in long-horizon task chaining. Project page and source code are available at: https://github.com/LZY-1021/SOMA.
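The offline Memory Consolidation module described above distills execution traces into reliable priors. A minimal sketch of what such a distillation could look like follows; the function `consolidate`, the trace format, and the success-rate threshold are all hypothetical assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of offline memory consolidation (illustrative only):
# aggregate raw (task, action, success) execution traces and keep only
# action priors with a high empirical success rate.

from collections import defaultdict

def consolidate(traces, min_success_rate=0.8):
    """Distill raw (task, action, success) traces into reliable priors."""
    stats = defaultdict(lambda: [0, 0])  # (task, action) -> [successes, total]
    for task, action, success in traces:
        stats[(task, action)][1] += 1
        if success:
            stats[(task, action)][0] += 1
    return {
        key: wins / total
        for key, (wins, total) in stats.items()
        if wins / total >= min_success_rate
    }

traces = [
    ("open_drawer", "pull_handle", True),
    ("open_drawer", "pull_handle", True),
    ("open_drawer", "push_edge", False),
]
priors = consolidate(traces)
print(priors)  # {('open_drawer', 'pull_handle'): 1.0}
```

The design choice sketched here, filtering by empirical reliability before promoting a trace to long-term memory, is one plausible reading of "distilling execution traces into reliable priors" for long-term decision consistency.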