Odysseus: Scaling VLMs to 100+ Turn Decision-Making in Games via Reinforcement Learning

arXiv cs.AI / 5/4/2026


Key Points

  • The paper explores how to train vision-language models (VLMs) with reinforcement learning for long-horizon, interactive decision-making in Super Mario Land, a task requiring 100+ turns of interaction.
  • It analyzes key RL algorithm components and proposes a modified PPO approach using a lightweight turn-level critic to improve training stability and sample efficiency versus critic-free alternatives.
  • Experiments show that starting from pretrained VLMs supplies strong action priors, boosting sample efficiency and reducing the need for manual action engineering compared with deep RL trained from scratch.
  • The authors introduce Odysseus, an open training framework for VLM agents, reporting sizable in-game gains (at least 3x the average game progress of frontier models) and better cross-game generalization while preserving general-domain abilities.

Abstract

Given the rapidly growing capabilities of vision-language models (VLMs), extending them to interactive decision-making tasks such as video games has emerged as a promising frontier. However, existing approaches either rely on large-scale supervised fine-tuning (SFT) on human trajectories or apply reinforcement learning (RL) only in relatively short-horizon settings (typically around 20--30 turns). In this work, we study RL-based training of VLMs for long-horizon decision-making in Super Mario Land, a visually grounded environment requiring 100+ turns of interaction with coordinated perception, reasoning, and action. We begin with a systematic investigation of key algorithmic components and propose an adapted variant of PPO with a lightweight turn-level critic, which substantially improves training stability and sample efficiency over critic-free methods such as GRPO and Reinforce++. We further show that pretrained VLMs provide strong action priors, significantly improving sample efficiency during RL training and reducing the need for manual design choices such as action engineering, compared to classical deep RL trained from scratch. Building on these insights, we introduce Odysseus, an open training framework for VLM agents, achieving substantial gains across multiple levels of the game and at least three times the average game progress of frontier models. Moreover, the trained models exhibit consistent improvements under both in-game and cross-game generalization settings, while maintaining general-domain capabilities. Overall, our results identify key ingredients for making RL stable and effective in long-horizon, multi-modal settings, and provide practical guidance for developing VLMs as embodied agents.
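
The abstract names an adapted PPO variant with a lightweight turn-level critic but gives no implementation details. Below is a minimal sketch of what turn-level advantage estimation could look like, assuming the critic scores one value per turn (rather than per token) and the resulting advantage is shared by all tokens generated in that turn. All names here (`TurnCritic`, `turn_level_gae`, the hyperparameter values) are illustrative assumptions, not the paper's actual code.

```python
# Sketch only: turn-level critic + GAE + PPO clipped loss, under the
# assumption that one value is estimated per turn of interaction.
import torch
import torch.nn as nn

class TurnCritic(nn.Module):
    """Lightweight value head: one scalar V(s_t) per turn embedding."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, turn_embeddings: torch.Tensor) -> torch.Tensor:
        # turn_embeddings: (num_turns, hidden_dim) -> (num_turns,)
        return self.value_head(turn_embeddings).squeeze(-1)

def turn_level_gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation computed once per turn.

    rewards, values: 1-D tensors of length num_turns for one episode.
    The per-turn advantages would then be broadcast to every token
    generated within the corresponding turn.
    """
    num_turns = rewards.shape[0]
    advantages = torch.zeros(num_turns)
    last_gae = 0.0
    for t in reversed(range(num_turns)):
        next_value = values[t + 1] if t + 1 < num_turns else 0.0
        delta = rewards[t] + gamma * next_value - values[t]
        last_gae = delta + gamma * lam * last_gae
        advantages[t] = last_gae
    return advantages

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate over per-turn log-prob sums."""
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```

One plausible motivation for this design: a critic that emits one value per turn instead of per token stays cheap to train, while still providing the variance reduction that critic-free methods such as GRPO and Reinforce++ lack over 100+ turn horizons.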
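The claim that pretrained VLMs supply strong action priors and reduce manual action engineering suggests an agent loop in which the model emits free-form text that is parsed into button presses. The sketch below illustrates that idea; the `env` interface and `vlm_generate` call are hypothetical stand-ins, not APIs from the released Odysseus framework.

```python
# Hypothetical interaction loop: the VLM sees a frame, reasons in text,
# and its answer is parsed into Game Boy button presses.
import re

BUTTONS = {"left", "right", "up", "down", "a", "b"}

def parse_action(response: str) -> set[str]:
    """Extract button names mentioned as whole words in the model's reply,
    e.g. 'hold right and press A to clear the gap' -> {'right', 'a'}."""
    tokens = set(re.findall(r"[a-z]+", response.lower()))
    return BUTTONS & tokens

def run_episode(env, vlm_generate, max_turns: int = 150) -> float:
    """Roll out one episode, querying the VLM once per turn."""
    frame = env.reset()
    total_reward = 0.0
    for _ in range(max_turns):
        prompt = "You are playing Super Mario Land. Which buttons do you press?"
        response = vlm_generate(prompt, image=frame)  # hypothetical call
        frame, reward, done = env.step(parse_action(response))
        total_reward += reward
        if done:
            break
    return total_reward
```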