ReFineVLA: Multimodal Reasoning-Aware Generalist Robotic Policies via Teacher-Guided Fine-Tuning

arXiv cs.RO / 4/21/2026

Key Points

  • ReFineVLA proposes a multimodal reasoning-aware framework that fine-tunes vision-language-action (VLA) robotic policies to explicitly include reasoning steps rather than only learning input-to-action mappings.
  • The approach augments robotic datasets with teacher-generated reasoning rationales, then fine-tunes pre-trained VLA models on this reasoning-enriched data to improve reasoning while preserving generalization (a minimal sketch of this pipeline follows the list).
  • The work includes attention map visualizations to verify alignment between visual observations, linguistic prompts, and the actions the robot is intended to execute.
  • On simulated long-horizon manipulation benchmarks in SimplerEnv (covering WidowX and Google Robot tasks), ReFineVLA reaches state-of-the-art success rates, outperforming the second-best method on both task sets.
  • Overall, the results suggest ReFineVLA improves multimodal understanding and cross-domain agreement between vision-language and action behavior in robotic manipulation.
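To make the pipeline concrete, below is a minimal sketch of the teacher-guided augmentation step. The paper summary does not specify an interface, so the dataset fields, prompt wording, and the `teacher_query` callable are illustrative assumptions rather than the authors' published code.

```python
# Hedged sketch of ReFineVLA-style teacher-guided data augmentation.
# Dataset fields, prompt wording, and `teacher_query` are assumptions,
# not the authors' published interface.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Episode:
    instruction: str                 # e.g. "put the carrot on the plate"
    image_paths: list[str]           # one camera observation per step
    actions: list[list[float]]       # one robot action vector per step
    rationales: list[str] = field(default_factory=list)

def build_teacher_prompt(instruction: str, step: int) -> str:
    """Prompt the expert teacher VLM for a short rationale at one step."""
    return (
        f"Task: {instruction}\n"
        f"Looking at the camera image for step {step}, state in one or two "
        f"sentences which sub-goal the robot should pursue next and why."
    )

def augment_with_rationales(
    episode: Episode,
    teacher_query: Callable[[str, str], str],
) -> Episode:
    """Attach one teacher-generated rationale per timestep.

    `teacher_query(prompt, image_path) -> str` wraps any expert
    vision-language model standing in for the paper's teacher.
    """
    for step, image_path in enumerate(episode.image_paths):
        prompt = build_teacher_prompt(episode.instruction, step)
        episode.rationales.append(teacher_query(prompt, image_path))
    return episode
```

During the subsequent fine-tuning stage, the rationale text would presumably be serialized into the training targets so the policy learns to emit its reasoning alongside (or before) the action tokens; how the rationale and action losses are weighted is not stated in this summary and would be a design choice.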

Abstract

Vision-Language-Action (VLA) models have gained much attention from the research community thanks to their strength in translating multimodal observations and linguistic instructions into desired robotic actions. Despite these advances, VLAs often overlook explicit reasoning: they learn direct input-to-action mappings and omit the intermediate logical steps, a shortcoming that is especially pronounced in interpretability and generalization for complex, long-horizon manipulation tasks. In this work, we propose ReFineVLA, a multimodal reasoning-aware framework that fine-tunes VLAs with teacher-guided reasoning. We first augment robotic datasets with reasoning rationales generated by an expert teacher model, guiding VLA models to learn to reason about their actions. We then fine-tune pre-trained VLAs on these reasoning-enriched datasets with ReFineVLA, maintaining their underlying generalization abilities while boosting reasoning capabilities. We also visualize attention maps to analyze the alignment among visual observations, linguistic prompts, and the actions to be executed, reflecting the model's ability to focus on task-relevant regions and actions. This analysis shows that ReFineVLA-trained models exhibit meaningful agreement between the vision-language and action domains, highlighting enhanced multimodal understanding and generalization. Evaluated across a suite of simulated manipulation benchmarks in SimplerEnv covering both WidowX and Google Robot tasks, ReFineVLA achieves state-of-the-art performance, surpassing the second-best method in success rate on both the WidowX benchmark and the Google Robot tasks.
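For the attention analysis mentioned in the abstract, a common recipe is to extract cross-attention weights from the policy's transformer and overlay them on the camera image, checking whether action tokens attend to task-relevant regions. The sketch below is a generic version of that recipe, not the authors' code; the patch-grid layout and min-max normalization are assumptions.

```python
# Generic sketch: overlaying one query token's attention to image patches
# on top of the RGB observation. Shapes and layout are assumptions.
import numpy as np
import matplotlib.pyplot as plt

def plot_attention_overlay(
    attn: np.ndarray,             # weights from one token to patches, (P,)
    image: np.ndarray,            # HxWx3 RGB observation
    patch_grid: tuple[int, int],  # (rows, cols) of vision patches, e.g. (16, 16)
) -> None:
    rows, cols = patch_grid
    heat = attn.reshape(rows, cols)
    # Min-max normalize so the colormap spans the full range.
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)

    plt.imshow(image)
    # Stretch the patch-level heatmap over the image via the extent argument.
    plt.imshow(heat, cmap="jet", alpha=0.5,
               extent=(0.0, image.shape[1], image.shape[0], 0.0))
    plt.axis("off")
    plt.title("Action-token attention over visual patches")
    plt.show()
```

If the policy is a Hugging Face-style transformer, the attention tensor can typically be obtained by running a forward pass with `output_attentions=True` and slicing out the row corresponding to the action token of interest.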