SciNav: A General Agent Framework for Scientific Coding Tasks

arXiv cs.CL · March 24, 2026


Key Points

  • The paper introduces SciNav (Scientific Navigator), a general agent framework tailored specifically to scientific coding tasks where outputs are executable and objectively evaluable via benchmarks.
  • SciNav is designed to work under constrained search budgets by using tree search with pairwise relative (comparative) judgments to select and prune solution branches efficiently.
  • Instead of relying on fixed success metrics or long search cycles, the framework progressively narrows candidates along the most promising branches using relative comparisons.
  • Experiments on two benchmarks show SciNav significantly outperforms direct prompting and prior agents such as OpenHands and Self-Debug across multiple base models, task types, and difficulty levels.
  • SciNav also beats baseline selection strategies, including random selection and LLM absolute scoring, supporting the claim that relative judgment is more discriminative in this setting.
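The selection loop described in the bullets above can be sketched in a few lines. Note this is a minimal illustration under assumptions, not the paper's implementation: the names (`pairwise_judge`, `expand`, `scinav_style_search`) are hypothetical, the LLM judge is stubbed with a numeric score comparison, and a real agent would generate and execute candidate code at each expansion step.

```python
import itertools
import random

def pairwise_judge(a, b):
    # Stand-in for an LLM judge answering "which candidate is better?".
    # Here we simply compare a numeric quality score attached to each
    # candidate; the real framework uses pairwise relative judgments.
    return a if a["score"] >= b["score"] else b

def expand(candidate, branching=3):
    # Hypothetical expansion: derive child solutions from a parent.
    # A real agent would ask the LLM to revise or extend the parent.
    return [
        {"plan": candidate["plan"] + [i],
         "score": candidate["score"] + random.random() - 0.4}
        for i in range(branching)
    ]

def top_k_by_comparison(candidates, k):
    # Round-robin pairwise comparisons: each win earns a point, and the
    # k candidates with the most wins survive (the rest are pruned).
    wins = {id(c): 0 for c in candidates}
    for a, b in itertools.combinations(candidates, 2):
        wins[id(pairwise_judge(a, b))] += 1
    ranked = sorted(candidates, key=lambda c: wins[id(c)], reverse=True)
    return ranked[:k]

def scinav_style_search(root, depth=3, branching=3, k=2):
    # Tree search under a fixed budget: expand the surviving branches,
    # then prune back to the top-k via relative comparisons.
    frontier = [root]
    for _ in range(depth):
        children = [c for parent in frontier
                      for c in expand(parent, branching)]
        frontier = top_k_by_comparison(children, k)
    # Final narrowing: compare the last survivors head-to-head.
    return top_k_by_comparison(frontier, 1)[0]

random.seed(0)
best = scinav_style_search({"plan": [], "score": 0.0})
print(len(best["plan"]))  # depth of the chosen branch
```

The key design point is that `top_k_by_comparison` never assigns an absolute score at selection time: candidates are ranked purely by comparison wins, which is the property the paper argues gives finer-grained discrimination under a constrained search budget.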

Abstract

Autonomous science agents built on large language models (LLMs) are increasingly used to generate hypotheses, design experiments, and produce reports. However, prior work mainly targets open-ended scientific problems with subjective outputs that are difficult to evaluate. Scientific coding benchmarks, by contrast, provide executable outputs for objective assessment. Existing approaches remain engineering-driven pipelines, revealing the need for structured, end-to-end science agent frameworks for scientific coding tasks. We address this gap by focusing on scientific coding tasks, where evaluation can be performed rigorously, and by introducing SciNav (Scientific Navigator), an agent framework that enables more effective solution exploration. The framework is designed to operate under constrained search budgets, moving beyond reliance on pre-defined success metrics and prolonged search cycles. Inspired by findings that comparative judgments often reveal finer-grained quality differences, and therefore offer greater discriminative power than absolute scoring, SciNav leverages pairwise relative judgments within a tree search process: it selects the top-K promising solution branches, prunes low-potential ones, and progressively narrows the candidate set along the selected branches, guided by relative comparisons. We demonstrate the agent's effectiveness across different task types on two benchmarks. Experiments show that SciNav significantly outperforms direct prompting and prior agents such as OpenHands and Self-Debug across base models, task types, and difficulty levels, and exceeds alternative selection strategies such as random selection and LLM absolute scoring. These results confirm the strength of our agent design and highlight the effectiveness of relative-judgment-guided top-K search for high-quality scientific coding, marking a step toward more practical science agents.