
SciDesignBench: Benchmarking and Improving Language Models for Scientific Inverse Design

arXiv cs.LG / March 16, 2026


Key Points

  • SciDesignBench is introduced as a benchmark of 520 simulator-grounded tasks across 14 scientific domains and five settings that evaluate inverse design: given a desired outcome, find the inputs that achieve it.
  • On the 10-domain shared-core subset, the best zero-shot model succeeds on only 29.0% of tasks even though its outputs parse at a much higher rate. Simulator feedback helps, but the leaderboard shifts with horizon: Sonnet 4.5 leads one-turn de novo design, while Opus 4.6 leads after 20 turns of simulator-grounded refinement (a minimal sketch of such a feedback loop follows this list).
  • Providing a starting seed design reshuffles the leaderboard, illustrating that constrained modification requires capabilities distinct from unconstrained de novo generation.
  • A simulator-feedback training recipe, RLSF, is proposed: an 8B model tuned with RLSF gains 8–17 percentage points in single-turn success across three domains, suggesting that expensive test-time compute can be amortized into model weights and positioning simulator-grounded inverse design as both a scientific benchmark and a practical tool.
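
To make the multi-turn settings concrete, here is a minimal sketch of the kind of simulator-grounded refinement loop they imply. None of these names (Task, propose_design, run_simulator) come from the paper; they are hypothetical stand-ins for the language-model call and the domain simulator, and success is assumed to mean landing within a tolerance of a scalar target.

```python
"""Minimal sketch of a simulator-grounded refinement loop in the spirit of
SciDesignBench's multi-turn settings. Task, propose_design, and run_simulator
are hypothetical illustrations, not the benchmark's actual API."""

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Task:
    spec: str         # natural-language description of the desired outcome
    target: float     # target property value the design must hit
    tolerance: float  # how close the simulated value must be to count as success


def refine(
    task: Task,
    propose_design: Callable[[str, list], str],        # LM call (hypothetical)
    run_simulator: Callable[[str], Optional[float]],   # domain simulator (hypothetical)
    max_turns: int = 20,
) -> tuple[bool, list]:
    """Iterate: propose a design, simulate it, feed the score back to the model."""
    history: list[tuple[str, float]] = []
    for _ in range(max_turns):
        design = propose_design(task.spec, history)
        value = run_simulator(design)
        if value is None:                     # unparseable/invalid design
            history.append((design, float("nan")))
            continue
        if abs(value - task.target) <= task.tolerance:
            return True, history              # simulated property meets the spec
        history.append((design, value))       # miss: feed the value back next turn
    return False, history
```

The one-turn de novo setting corresponds to max_turns=1 with an empty history; the seed-design setting would additionally seed history (or the prompt) with a starting design to be modified rather than generated from scratch.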

Abstract

Many of the most important problems in science and engineering are inverse problems: given a desired outcome, find a design that achieves it. Evaluating whether a candidate meets the spec is often routine; a binding energy can be computed, a reactor yield simulated, a pharmacokinetic profile predicted. But searching a combinatorial design space for inputs that satisfy those targets is fundamentally harder. We introduce SciDesignBench, a benchmark of 520 simulator-grounded tasks across 14 scientific domains and five settings spanning single-shot design, short-horizon feedback, long-horizon refinement, and seed-design optimization. On the 10-domain shared-core subset, the best zero-shot model reaches only 29.0% success despite substantially higher parse rates. Simulator feedback helps, but the leaderboard changes with horizon: Sonnet 4.5 is strongest in one-turn de novo design, whereas Opus 4.6 is strongest after 20 turns of simulator-grounded refinement. Providing a starting seed design reshuffles the leaderboard again, demonstrating that constrained modification requires a fundamentally different capability from unconstrained de novo generation. We then introduce RLSF, a simulator-feedback training recipe. An RLSF-tuned 8B model raises single-turn success rates by 8–17 percentage points across three domains. Together, these results position simulator-grounded inverse design as both a benchmark for scientific reasoning and a practical substrate for amortizing expensive test-time compute into model weights.
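
The abstract names RLSF but this summary does not spell out its recipe, so the following is only a hedged sketch under stated assumptions: it assumes the simulator's verdict is turned into a scalar reward (binary success plus a distance-based shaping term, my assumption) and paired with a group-relative baseline of the kind used in GRPO-style policy-gradient fine-tuning. All function names are illustrative, not the paper's.

```python
"""Hedged sketch of a simulator-feedback reward for RL fine-tuning.
The reward shaping and the group-relative baseline are illustrative
assumptions, not RLSF's stated recipe."""

import math
from typing import Optional


def simulator_reward(value: Optional[float], target: float, tolerance: float) -> float:
    """Map a simulated property value to a scalar reward (tolerance > 0 assumed)."""
    if value is None or math.isnan(value):
        return -1.0   # penalize unparseable or invalid designs
    if abs(value - target) <= tolerance:
        return 1.0    # task success under the benchmark's tolerance
    # Smooth shaping: simulated values closer to the target earn partial credit.
    return -min(abs(value - target) / (10.0 * tolerance), 1.0)


def group_relative_advantages(rewards: list) -> list:
    """GRPO-style baseline: normalize each sampled design's reward against the
    group, so the policy gradient favors above-average designs for the same task."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    std = std if std > 0 else 1.0   # guard against a degenerate all-equal group
    return [(r - mean) / std for r in rewards]
```

A group-relative baseline is a natural fit here because the simulator reward is cheap to query per sampled design and needs no learned value function, but whether RLSF actually uses this setup is not stated in this summary.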