Environmental Understanding Vision-Language Model for Embodied Agent

arXiv cs.CV / 4/23/2026


Key Points

  • The paper presents EUEA (Environmental Understanding Embodied Agent), a framework that fine-tunes vision-language models for embodied agents to improve environmental understanding during instruction-following.
  • EUEA targets four skills—object perception, task planning, action understanding, and goal recognition—so the agent can form more reliable interaction subgoals and verify success.
  • It adds a recovery step that tries alternative actions to fix failure cases, plus a GRPO stage to refine inconsistent skill predictions.
  • Experiments on ALFRED show EUEA significantly beats a behavior-cloning baseline, improving average success rate by 8.86%, with an additional 3.03% from the recovery and GRPO stages.
  • Skill-level analyses highlight specific environmental understanding weaknesses in both closed- and open-source VLMs and outline what capabilities are needed for effective agent-environment interaction.
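The four skills and the recovery step described above can be sketched as a simple agent loop. This is an illustrative sketch, not the paper's implementation: every function name and environment interface here is a hypothetical stand-in for the predictions a fine-tuned VLM would produce.

```python
# Illustrative EUEA-style agent loop (hypothetical interfaces, not the paper's code).
# Each helper stands in for one of the four fine-tuned skill predictions.

def perceive_objects(observation):
    # Skill 1: object perception — identify objects relevant to the task.
    return [obj for obj in observation["objects"] if obj["relevant"]]

def plan_subgoals(instruction, objects):
    # Skill 2: task planning — turn the instruction into interaction subgoals.
    return [("pickup", obj["name"]) for obj in objects]

def action_likely_succeeded(observation, action):
    # Skill 3: action understanding — judge whether the action likely succeeded.
    return action[1] in observation.get("held", [])

def goal_reached(observation, instruction):
    # Skill 4: goal recognition — decide whether the overall goal is complete.
    return observation.get("done", False)

def run_episode(env, instruction, max_recovery=2):
    obs = env.reset()
    for subgoal in plan_subgoals(instruction, perceive_objects(obs)):
        obs = env.step(subgoal)
        tries = 0
        # Recovery step: sample alternative actions when the skill-level
        # check judges that the interaction failed.
        while not action_likely_succeeded(obs, subgoal) and tries < max_recovery:
            obs = env.step(env.sample_alternative(subgoal))
            tries += 1
    return goal_reached(obs, instruction)
```

The point of the structure is that skills 3 and 4 act as verifiers: the agent does not assume an action worked, it checks, and only then either retries (recovery) or moves on.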

Abstract

Vision-language models (VLMs) have shown strong perception and reasoning abilities for instruction-following embodied agents. Despite these abilities and their generalization performance, however, they still face limitations in environmental understanding, often failing at interactions or relying on environment metadata during execution. To address this challenge, we propose a novel framework named Environmental Understanding Embodied Agent (EUEA), which fine-tunes VLMs on four core skills: 1) object perception for identifying relevant objects, 2) task planning for generating interaction subgoals, 3) action understanding for judging success likelihood, and 4) goal recognition for determining goal completion. By fine-tuning VLMs with EUEA skills, our framework enables more reliable task execution for instruction-following. We further introduce a recovery step that leverages these core skills to sample alternative actions and correct failure cases, and a group relative policy optimization (GRPO) stage that refines inconsistent skill predictions. Across ALFRED tasks, our VLM significantly outperforms a behavior-cloning baseline, achieving an 8.86% improvement in average success rate. The recovery and GRPO stages provide an additional 3.03% gain, further enhancing overall performance. Finally, our skill-level analyses reveal key limitations in the environmental understanding of closed- and open-source VLMs and identify the capabilities necessary for effective agent-environment interaction.
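For context on the GRPO stage: the core idea of group relative policy optimization is to score a group of sampled predictions and normalize each reward within the group, so a sample's advantage reflects how it compares to its peers rather than to a separately learned value baseline. A minimal sketch of that group-relative advantage (with toy reward values, not numbers from the paper):

```python
# Group-relative advantage at the heart of GRPO: normalize each sampled
# prediction's reward by the group's mean and standard deviation.

def group_relative_advantages(rewards, eps=1e-8):
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    # eps guards against division by zero when all rewards in the group tie.
    return [(r - mean) / (std + eps) for r in rewards]

# Toy group of four sampled skill predictions: one good, one bad, two average.
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Samples that beat the group average get positive advantage and are reinforced; below-average samples are pushed down, which is how the stage can nudge inconsistent skill predictions toward the group's better ones.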