Toward Autonomous Long-Horizon Engineering for ML Research

arXiv cs.CL · April 15, 2026


Key Points

  • The paper argues that long-horizon ML research engineering is harder than short-run autonomy because agents must maintain coherent progress across task understanding, environment setup, implementation, experimentation, and debugging over hours or days.
  • It introduces AiScientist, an autonomous system built on the principle that strong long-horizon performance requires both structured orchestration and durable state continuity, realized as a hierarchical Orchestrator over a permission-scoped “File-as-Bus” workspace.
  • The approach emphasizes re-grounding specialized agents on persistent artifacts (analyses, plans, code, and experimental evidence) instead of relying mainly on conversational handoffs, aiming for “thin control over thick state.”
  • Experiments on two benchmarks show AiScientist improves PaperBench by an average of 10.54 points over the best matched baseline and achieves 81.82 Any Medal% on MLE-Bench Lite.
  • Ablation results indicate the File-as-Bus protocol is a major performance driver, with notable score drops (PaperBench −6.41, MLE-Bench Lite −31.82) when it is removed, framing long-horizon ML research as a systems coordination problem.

Abstract

Autonomous AI research has advanced rapidly, but long-horizon ML research engineering remains difficult: agents must sustain coherent progress across task comprehension, environment setup, implementation, experimentation, and debugging over hours or days. We introduce AiScientist, a system for autonomous long-horizon ML research engineering built on a simple principle: strong long-horizon performance requires both structured orchestration and durable state continuity. To this end, AiScientist combines hierarchical orchestration with a permission-scoped File-as-Bus workspace: a top-level Orchestrator maintains stage-level control through concise summaries and a workspace map, while specialized agents repeatedly re-ground on durable artifacts such as analyses, plans, code, and experimental evidence rather than relying primarily on conversational handoffs, yielding thin control over thick state. Across two complementary benchmarks, AiScientist improves PaperBench score by 10.54 points on average over the best matched baseline and achieves 81.82 Any Medal% on MLE-Bench Lite. Ablation studies further show that the File-as-Bus protocol is a key driver of performance, reducing PaperBench by 6.41 points and MLE-Bench Lite by 31.82 points when removed. These results suggest that long-horizon ML research engineering is a systems problem of coordinating specialized work over durable project state, rather than a purely local reasoning problem.