EOS-Bench: A Comprehensive Benchmark for Earth Observation Satellite Scheduling

arXiv cs.RO / 4/29/2026


Key Points

  • The paper introduces EOS-Bench, an open-source, unified benchmark framework for evaluating Earth observation satellite imaging scheduling algorithms under realistic orbital dynamics and platform constraints.
  • EOS-Bench generates 1,390 scenarios and 13,900 benchmark instances using high-fidelity orbital dynamics and platform constraints, scaling from small validation cases to problems with up to 1,000 satellites and 10,000 requests.
  • It includes a scenario characterization method that quantifies structural difficulty using factors such as opportunity density, task flexibility, conflict intensity, and satellite congestion (see the sketch after this list).
  • A multidimensional evaluation protocol is proposed, comparing methods on five metrics (task profit, completion rate, workload balance, timeliness, and runtime) across both agile and non-agile satellite settings; a minimal scoring sketch follows the abstract.
  • Experiments covering mixed-integer programming, heuristics, meta-heuristics, and deep reinforcement learning show EOS-Bench can effectively distinguish performance across problem scales and reveal trade-offs between solution quality and computational efficiency.
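
To make the difficulty factors concrete, here is a minimal, hypothetical Python sketch of how such quantities could be computed from a scenario's imaging opportunities. Only the four factor names come from the paper; the `Opportunity` fields and every formula below are illustrative assumptions, not EOS-Bench's actual code.

```python
# Hypothetical sketch of EOS-Bench-style scenario characterization.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Opportunity:
    satellite: int   # which satellite can image the request
    request: int     # which imaging request this window serves
    start: float     # window start, seconds into the planning horizon
    end: float       # window end


def characterize(opps: list[Opportunity], n_satellites: int,
                 n_requests: int, horizon: float) -> dict[str, float]:
    # Opportunity density: average number of imaging windows per request.
    density = len(opps) / max(n_requests, 1)

    # Task flexibility: mean window length relative to the horizon;
    # longer windows leave more room to shift a task in time.
    flexibility = sum(o.end - o.start for o in opps) / (max(len(opps), 1) * horizon)

    # Conflict intensity: fraction of same-satellite window pairs that
    # overlap in time and therefore compete for the platform.
    conflicts = pairs = 0
    by_sat: dict[int, list[Opportunity]] = {}
    for o in opps:
        by_sat.setdefault(o.satellite, []).append(o)
    for group in by_sat.values():
        group.sort(key=lambda o: o.start)
        for i, a in enumerate(group):
            for b in group[i + 1:]:
                pairs += 1
                if b.start < a.end:  # sorted by start, so this is overlap
                    conflicts += 1
    conflict_intensity = conflicts / max(pairs, 1)

    # Satellite congestion: max per-satellite load over the mean load
    # (1.0 means opportunities are spread perfectly evenly).
    loads = Counter(o.satellite for o in opps)
    mean_load = len(opps) / max(n_satellites, 1)
    congestion = max(loads.values()) / mean_load if loads else 1.0

    return {"opportunity_density": density,
            "task_flexibility": flexibility,
            "conflict_intensity": conflict_intensity,
            "satellite_congestion": congestion}
```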

Abstract

Earth observation satellite imaging scheduling is a challenging NP-hard combinatorial optimisation problem central to space mission operations. While next-generation agile Earth observation satellites (EOS) increase operational flexibility, they also significantly raise scheduling complexity. The lack of a unified, open-source benchmark makes it difficult to compare algorithms across studies. This paper introduces EOS-Bench, a comprehensive framework for systematic and reproducible evaluation of scheduling methods. By integrating high-fidelity orbital dynamics and platform constraints, EOS-Bench generates 1,390 scenarios and 13,900 benchmark instances, spanning from small-scale validation cases to large coordination problems with up to 1,000 satellites and 10,000 requests. We further propose a scenario characterisation scheme to quantify structural difficulty based on factors such as opportunity density, task flexibility, conflict intensity, and satellite congestion. A multidimensional evaluation protocol is introduced, assessing performance across five metrics: task profit, completion rate, workload balance, timeliness, and runtime. The framework is evaluated using mixed-integer programming, heuristics, meta-heuristics, and deep reinforcement learning across both agile and non-agile settings. Results show that EOS-Bench effectively distinguishes solver performance across scales and conditions, revealing trade-offs between solution quality and computational efficiency, and providing deeper insight into scenario complexity. EOS-Bench offers a unified and extensible open testbed for advancing research in Earth observation satellite scheduling. The code and data are available at https://github.com/Ethan19YQ/EOS-Bench.
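
As a rough illustration of the five-metric protocol, the hypothetical Python sketch below scores a single schedule produced by a solver. The solver interface, the request/schedule layout, and the workload-balance and timeliness formulas are all assumptions made for illustration; only the five metric names come from the paper.

```python
# Hypothetical scoring of one schedule on the paper's five axes.
import statistics
import time


def evaluate(solver, requests, horizon):
    t0 = time.perf_counter()
    schedule = solver(requests)         # {request id: (satellite id, obs time)}
    runtime = time.perf_counter() - t0  # metric 5: runtime

    served = [r for r in requests if r["id"] in schedule]
    profit = sum(r["profit"] for r in served)   # metric 1: task profit
    completion = len(served) / len(requests)    # metric 2: completion rate

    # Metric 3: workload balance as 1 minus the coefficient of variation
    # of per-satellite task counts (1.0 = perfectly even workload).
    loads: dict[int, int] = {}
    for sat, _t in schedule.values():
        loads[sat] = loads.get(sat, 0) + 1
    counts = list(loads.values())
    balance = 1.0 - (statistics.pstdev(counts) / statistics.mean(counts)
                     if len(counts) > 1 else 0.0)

    # Metric 4: timeliness as how early within the horizon served
    # requests are imaged (1.0 = immediately, 0.0 = at the very end).
    times = [schedule[r["id"]][1] for r in served]
    timeliness = 1.0 - sum(times) / (len(times) * horizon) if times else 0.0

    return {"task_profit": profit, "completion_rate": completion,
            "workload_balance": balance, "timeliness": timeliness,
            "runtime_seconds": runtime}
```

Per the paper, these five scores are then compared across agile and non-agile settings and across problem scales to expose quality-versus-runtime trade-offs between solvers.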