InfiniteScienceGym: An Unbounded, Procedurally-Generated Benchmark for Scientific Analysis

arXiv cs.AI / 4/16/2026


Key Points

  • InfiniteScienceGym proposes a new benchmark that procedurally generates scientific repositories and pairs them with verifiable question-answering tasks, enabling quantitative evaluation of LLMs' reasoning grounded in empirical data.
  • Its distinguishing feature is that, from a seed, it deterministically generates a self-contained repository (with a realistic directory structure, files, and tabular data), while a privileged QA generator simultaneously produces both answerable and unanswerable questions with exact ground truth.
  • It aims to complement existing benchmarks, which suffer from publication bias, known-knowledge bias, label noise, and the burden of distributing massive corpora, without shipping a large static dataset.
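The seed-to-repository idea above can be sketched in a few lines. This is a minimal illustration of deterministic procedural generation, not the paper's actual simulator; the function name, file layout, and column names are all hypothetical.

```python
import csv
import io
import random

def generate_repository(seed: int) -> dict:
    """Hypothetical sketch: deterministically derive a tiny 'repository'
    (file paths mapped to contents) from a single integer seed."""
    rng = random.Random(seed)  # same seed -> identical repository
    n_rows = rng.randint(5, 10)
    # Fabricate a small CSV of illustrative "measurements".
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["sample_id", "temperature_c", "yield_pct"])
    for i in range(n_rows):
        writer.writerow([i,
                         round(rng.uniform(20, 80), 2),
                         round(rng.uniform(0, 100), 2)])
    return {
        "README.md": f"# Experiment {seed}\nAuto-generated study notes.",
        "data/measurements.csv": buf.getvalue(),
    }

# Determinism: the same seed always yields byte-identical content,
# so the benchmark can be reproduced without distributing a static corpus.
assert generate_repository(42) == generate_repository(42)
assert generate_repository(42) != generate_repository(43)
```

Because the repository is a pure function of the seed, evaluators only need to share seeds, not gigabytes of data, which is how the paper sidesteps the corpus-distribution burden.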

Abstract

Large language models are emerging as scientific assistants, but evaluating their ability to reason from empirical data remains challenging. Benchmarks derived from published studies and human annotations inherit publication bias, known-knowledge bias, label noise, and substantial storage requirements. We present InfiniteScienceGym, a procedurally generated benchmark of scientific repositories paired with a verifiable question-answering task. From a seed, the simulator deterministically generates a self-contained repository with realistic directory structure, files, and tabular data, and a privileged QA generator produces both answerable and unanswerable questions with exact ground truth. This makes it possible to evaluate evidence-grounded reasoning, abstention, and tool-mediated analysis in a controlled setting without distributing a large static corpus. InfiniteScienceGym complements real scientific benchmarks by targeting blind spots and failure modes that are hard to evaluate using published datasets alone. Evaluating both proprietary and open-weight models, we find that none achieve more than 45% accuracy overall, that recognizing unanswerable questions remains a major weakness, and that stronger models tend to use tools more effectively rather than simply consuming more tokens.
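The "privileged QA generator" described in the abstract can be illustrated with a toy sketch. Because the generator sees the ground-truth tables directly, it can compute exact answers for answerable questions and certify that unanswerable ones have no answer in the data. Everything here (function name, question templates, the missing `pressure_kpa` column) is an assumption for illustration, not the paper's implementation.

```python
import random

def generate_qa(table: dict, rng: random.Random) -> dict:
    """Hypothetical privileged QA generator.

    `table` maps column names to lists of values, i.e. the generator has
    direct access to the ground truth the evaluated model must recover."""
    col = rng.choice(list(table))
    if rng.random() < 0.5:
        # Answerable: the exact answer is computed from the data itself.
        return {"question": f"What is the maximum value of {col}?",
                "answer": max(table[col]),
                "answerable": True}
    # Unanswerable: asks about a column known not to exist in the table,
    # so abstention is the only correct response.
    return {"question": "What is the maximum value of pressure_kpa?",
            "answer": None,
            "answerable": False}
```

Pairing each question with an `answerable` flag is what lets the benchmark score abstention directly, the capability the paper identifies as a major weakness of current models.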