Stargazer: A Scalable Model-Fitting Benchmark Environment for AI Agents under Astrophysical Constraints

arXiv cs.LG / 4/20/2026


Key Points

  • Stargazer is a new scalable benchmark environment for evaluating autonomous AI agents on dynamic, iterative physics-grounded model-fitting tasks using radial-velocity (RV) time-series inference.
  • The benchmark includes 120 tasks across three difficulty tiers, featuring 20 real archival cases that span from high-SNR single-planet systems to complex multi-planet, low-SNR configurations.
  • An evaluation of eight frontier agents finds a recurring gap: agents often achieve good statistical fits yet fail to recover the physically correct system parameters, frequently violating physical constraints in the process.
  • The study reports that increasing test-time compute brings only marginal improvements, and excessive token use often correlates with recursive failure loops rather than productive exploration.
  • Stargazer is positioned as a platform to train, evaluate, scaffold, and scale agent strategies, and the simulation-driven methodology is suggested to generalize to other scientific model-fitting problems.
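To make the task concrete, the sketch below fits a single-planet, circular-orbit Keplerian model to synthetic radial-velocity data, the simplest form of the inference problem described above. This is an illustrative example, not code from the Stargazer benchmark; the parameter values, noise level, and helper names are assumptions for demonstration. It also shows the two quantities the paper distinguishes: the statistical fit quality (reduced chi-squared) and the recovered physical parameters (period and semi-amplitude).

```python
import numpy as np
from scipy.optimize import curve_fit

def rv_model(t, K, P, phi, v0):
    """Circular-orbit RV curve: semi-amplitude K [m/s], period P [days],
    phase phi [rad], systemic velocity v0 [m/s]."""
    return K * np.sin(2 * np.pi * t / P + phi) + v0

# Synthetic high-SNR single-planet observations (illustrative values)
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 200, 60))     # irregular observation epochs [days]
sigma = 3.0                              # per-point noise [m/s]
rv = rv_model(t, K=35.0, P=12.3, phi=0.7, v0=5.0) + rng.normal(0, sigma, t.size)

# Least-squares fit; a reasonable initial period guess matters because the
# RV likelihood is highly multimodal (period aliases can also fit locally)
p0 = [30.0, 12.0, 0.5, 0.0]
popt, _ = curve_fit(rv_model, t, rv, p0=p0, sigma=np.full(t.size, sigma))
K_fit, P_fit, phi_fit, v0_fit = popt

# Statistical fit quality vs. physical parameter recovery
resid = rv - rv_model(t, *popt)
chi2_red = np.sum((resid / sigma) ** 2) / (t.size - 4)
print(f"P_fit = {P_fit:.3f} d, K_fit = {K_fit:.2f} m/s, chi2_red = {chi2_red:.2f}")
```

A low reduced chi-squared at an alias period would look statistically acceptable while being physically wrong, which is precisely the failure mode the evaluation attributes to frontier agents.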

Abstract

The rise of autonomous AI agents calls for dynamic benchmark environments with built-in feedback on scientifically grounded tasks to evaluate these agents' capabilities in research work. We introduce Stargazer, a scalable environment for evaluating AI agents on dynamic, iterative, physics-grounded model-fitting tasks using inference on radial-velocity (RV) time-series data. Stargazer comprises 120 tasks across three difficulty tiers, including 20 real archival cases, covering scenarios that range from high-SNR single-planet systems to complex multi-planet configurations requiring careful low-SNR analysis. Our evaluation of eight frontier agents reveals a gap between numerical optimization and adherence to physical constraints: although agents often achieve a good statistical fit, they frequently fail to recover the correct physical system parameters, a limitation that persists even when agents are equipped with vanilla skills. Furthermore, increasing test-time compute yields only marginal gains, with excessive token usage often reflecting recursive failure loops rather than meaningful exploration. Stargazer presents an opportunity to train, evaluate, scaffold, and scale agent strategies on a model-fitting problem of practical research relevance today. Our methodology for designing a simulation-driven environment for AI agents is likely to generalize to many other model-fitting problems across scientific domains. Source code and the project website are available at https://github.com/Gudmorning2025/Stargazer and https://gudmorning2025.github.io/Stargazer, respectively.