EO-Gym: A Multimodal, Interactive Environment for Earth Observation Agents

arXiv cs.AI / 5/5/2026


Key Points

  • The paper introduces EO-Gym, a Gymnasium-style, local geospatial workspace designed for multimodal, tool-using Earth Observation (EO) agents that can operate interactively rather than in fixed single-turn tasks.
  • EO-Gym is backed by an indexed collection of 660k+ multimodal files (by location, time, and sensor type) and includes 35 EO-specialized tools across six task families to support uncertainty resolution via expanding regions and switching sensors.
  • The authors create EO-Gym-Data, a benchmark with 9,078 trajectories and 34,604 reasoning steps, built from eight public EO datasets plus Landsat and Sentinel-2 imagery.
  • Evaluations of 10 open and closed vision-language models (VLMs) show that even strong general-purpose models have difficulty with interactive EO reasoning, particularly for temporal and cross-modal workflows.
  • Fine-tuning Qwen3-VL-4B-Instruct on EO-Gym-Data to produce EO-Gym-4B boosts Pass@3 from 0.49 to 0.74 under the main evaluation setting, establishing an initial reference baseline.

Abstract

Earth Observation (EO) analysis is inherently interactive: resolving uncertainty often requires expanding the region of interest, retrieving historical observations, and switching across sensors such as optical and Synthetic Aperture Radar. However, most EO benchmarks collapse this process into fixed-input, single-turn tasks. To address this gap, we present EO-Gym, a controlled, executable framework for multimodal, tool-using EO agents that formulates EO analysis as a Gymnasium-style local geospatial workspace backed by more than 660k multimodal files indexed by location, time, and sensor type, with 35 EO-specialized tools spanning six task families. Built on this environment, we construct EO-Gym-Data, a benchmark of 9,078 trajectories and 34,604 reasoning steps, grounded in eight public EO datasets together with Landsat and Sentinel-2 imagery. Evaluating 10 open and closed VLMs shows that strong general-purpose models still struggle with interactive EO reasoning, especially on temporal and cross-modal workflows. As a reference baseline, EO-Gym-4B, obtained by fine-tuning Qwen3-VL-4B-Instruct on EO-Gym-Data, improves overall Pass@3 from 0.49 to 0.74 under the main evaluation setting. EO-Gym provides a reproducible environment for interactive EO agents, operationalizing EO as an evidence-gathering problem that requires planning across geospatial, temporal, and sensing modalities.
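To make the "Gymnasium-style workspace with tool calls" framing concrete, here is a minimal, self-contained sketch of what such an interaction loop could look like. All names below (`ToyEOEnv`, the `expand_region` and `switch_sensor` tools, the `Observation` fields) are illustrative assumptions, not EO-Gym's actual API; the `step` signature simply mirrors Gymnasium's five-tuple convention.

```python
"""Hypothetical sketch of a Gymnasium-style EO agent loop.
Tool names and observation fields are illustrative, not from EO-Gym."""

from dataclasses import dataclass


@dataclass
class Observation:
    scene_id: str   # key into an indexed multimodal file store
    sensor: str     # e.g. "optical" or "SAR"
    timestamp: str


class ToyEOEnv:
    """Minimal stand-in for a tool-using EO workspace: the agent acts by
    naming a tool and its arguments; the env returns a new observation."""

    def __init__(self):
        self.tools = {
            "expand_region": self._expand_region,
            "switch_sensor": self._switch_sensor,
        }
        self.sensor = "optical"
        self.radius_km = 1.0

    def reset(self):
        self.sensor, self.radius_km = "optical", 1.0
        return Observation("scene-000", self.sensor, "2024-01-01")

    def step(self, action):
        # action = (tool_name, kwargs), mirroring a tool-calling agent
        tool_name, kwargs = action
        obs = self.tools[tool_name](**kwargs)
        terminated = False  # a real env would check task completion here
        return obs, 0.0, terminated, False, {}

    def _expand_region(self, factor=2.0):
        # Widen the region of interest to gather more spatial context
        self.radius_km *= factor
        return Observation("scene-001", self.sensor, "2024-01-01")

    def _switch_sensor(self, sensor="SAR"):
        # Swap modality, e.g. optical -> SAR when clouds block the view
        self.sensor = sensor
        return Observation("scene-002", self.sensor, "2024-01-01")


env = ToyEOEnv()
obs = env.reset()
obs, _, _, _, _ = env.step(("expand_region", {"factor": 2.0}))
obs, _, _, _, _ = env.step(("switch_sensor", {"sensor": "SAR"}))
print(obs.sensor, env.radius_km)  # SAR 2.0
```

The point of the sketch is the shape of the problem: each uncertainty-resolution move in the abstract (expanding the region, switching sensors) becomes one tool call in a multi-step episode, rather than a single fixed-input prediction.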