AutoSOTA: An End-to-End Automated Research System for State-of-the-Art AI Model Discovery

arXiv cs.CL / 4/8/2026


Key Points

  • AutoSOTA is an end-to-end automated research system designed to reproduce state-of-the-art AI models from recent top-tier papers and then empirically improve them into new SOTA results.
  • The approach uses a multi-agent architecture with eight specialized agents covering paper-to-code grounding, environment setup and repair, long-horizon experiment tracking, idea generation/scheduling, and validity supervision to reduce spurious improvements.
  • AutoSOTA structures its workflow into three coupled stages: resource preparation & goal setting, experiment evaluation, and reflection & ideation.
  • In evaluations using papers from eight major AI conferences (filtered for code availability and feasible execution cost), the system reportedly discovers 105 new SOTA models, averaging about five hours per paper.
  • Case studies across domains such as LLMs, NLP, computer vision, time series, and optimization suggest it can go beyond hyperparameter tuning toward architectural, algorithmic, and workflow-level improvements.
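The three-stage loop described in the points above can be sketched in miniature. Everything below is an illustrative assumption, not the paper's actual API: the function names (`prepare_resources`, `run_experiment`, `reflect_and_ideate`), the toy scoring rule, and the baseline value are all hypothetical stand-ins for the agents' real behavior.

```python
"""Minimal sketch of AutoSOTA's three coupled stages (illustrative only):
resource preparation & goal setting -> experiment evaluation -> reflection & ideation."""
from dataclasses import dataclass


@dataclass
class Experiment:
    idea: str
    score: float


def prepare_resources(paper: str) -> float:
    # Stage 1: ground the paper to code/dependencies and set the goal as the
    # reported SOTA score. Hypothetical baseline value for illustration.
    return 0.80


def run_experiment(idea: str, baseline: float) -> Experiment:
    # Stage 2: execute the modified pipeline and measure its score.
    # Toy scoring rule; a real run would train and evaluate the model.
    bonus = {"tune-lr": 0.01, "new-scheduler": 0.02, "arch-tweak": 0.03}
    return Experiment(idea, baseline + bonus.get(idea, 0.0))


def reflect_and_ideate(history: list[Experiment]) -> list[str]:
    # Stage 3: inspect results so far and propose the next optimization ideas,
    # ranging from hyperparameter tweaks to architectural changes.
    tried = {e.idea for e in history}
    candidates = ["tune-lr", "new-scheduler", "arch-tweak"]
    return [c for c in candidates if c not in tried]


def autosota_loop(paper: str, budget: int = 3) -> Experiment:
    baseline = prepare_resources(paper)
    history: list[Experiment] = []
    best = Experiment("reported-sota", baseline)
    for _ in range(budget):
        ideas = reflect_and_ideate(history)
        if not ideas:
            break
        exp = run_experiment(ideas[0], baseline)
        history.append(exp)
        # A validity-supervision agent would vet each gain here
        # to rule out spurious improvements before accepting it.
        if exp.score > best.score:
            best = exp
    return best
```

In the real system, each stage is handled by specialized agents (environment setup and repair, long-horizon experiment tracking, idea scheduling, validity supervision) rather than plain functions, and the loop runs for hours per paper rather than three toy iterations.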

Abstract

Artificial intelligence research increasingly depends on prolonged cycles of reproduction, debugging, and iterative refinement to achieve State-Of-The-Art (SOTA) performance, creating a growing need for systems that can accelerate the full pipeline of empirical model optimization. In this work, we introduce AutoSOTA, an end-to-end automated research system that reproduces the latest SOTA models published in top-tier AI papers and advances them to empirically improved new SOTA models. We formulate this problem through three tightly coupled stages: resource preparation and goal setting; experiment evaluation; and reflection and ideation. To tackle this problem, AutoSOTA adopts a multi-agent architecture with eight specialized agents that collaboratively ground papers to code and dependencies, initialize and repair execution environments, track long-horizon experiments, generate and schedule optimization ideas, and supervise validity to avoid spurious gains. We evaluate AutoSOTA on recent research papers collected from eight top-tier AI conferences under filters for code availability and execution cost. Across these papers, AutoSOTA achieves strong end-to-end performance in both automated replication and subsequent optimization. Specifically, it successfully discovers 105 new SOTA models that surpass the originally reported methods, averaging approximately five hours per paper. Case studies spanning LLMs, NLP, computer vision, time series, and optimization further show that the system can move beyond routine hyperparameter tuning to identify architectural innovations, algorithmic redesigns, and workflow-level improvements. These results suggest that end-to-end research automation can serve not only as a performance optimizer, but also as a new form of research infrastructure that reduces repetitive experimental burden and helps redirect human attention toward higher-level scientific creativity.