Frontier-Eng: Benchmarking Self-Evolving Agents on Real-World Engineering Tasks with Generative Optimization

arXiv cs.AI / 4/15/2026


Key Points

  • The paper introduces Frontier-Eng, a human-verified benchmark designed to evaluate self-evolving AI agents on real-world engineering tasks framed as generative optimization rather than simple pass/fail objectives.
  • Frontier-Eng uses an iterative propose–execute–evaluate loop with executable verifiers and industrial-grade simulators, providing continuous reward signals while enforcing hard feasibility constraints within a fixed interaction budget.
  • The benchmark covers 47 tasks across five engineering categories and evaluates eight frontier language models using representative search frameworks.
  • Claude 4.6 Opus delivers the most robust performance overall, but the results indicate the benchmark remains challenging for all tested models.
  • The authors report a dual power-law decay in improvement frequency and magnitude and find that depth is more critical than width for achieving hard-won improvements under limited budgets.
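The propose–execute–evaluate loop described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `propose` and `verify` are hypothetical stand-ins for the agent and for Frontier-Eng's executable verifiers, and the objective is a made-up one-dimensional problem with a feasibility constraint.

```python
import random

def generative_optimization(propose, verify, budget):
    """Minimal propose-execute-evaluate loop: keep the best feasible
    candidate found within a fixed interaction budget."""
    best_artifact, best_score = None, float("-inf")
    for _ in range(budget):
        candidate = propose(best_artifact)       # agent revises the current best
        feasible, score = verify(candidate)      # executable verifier feedback
        if feasible and score > best_score:      # hard constraint + continuous reward
            best_artifact, best_score = candidate, score
    return best_artifact, best_score

# Toy stand-ins: maximize -(x - 3)^2 subject to the hard constraint x >= 0.
def propose(current):
    base = 0.0 if current is None else current
    return base + random.uniform(-1.0, 1.0)

def verify(x):
    return (x >= 0), -(x - 3.0) ** 2

random.seed(0)
artifact, score = generative_optimization(propose, verify, budget=50)
```

The key structural features the benchmark emphasizes all appear here: a continuous reward rather than pass/fail, a hard feasibility check that can reject a candidate outright, and a fixed number of verifier interactions.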

Abstract

Current LLM agent benchmarks, which predominantly focus on binary pass/fail tasks such as code generation or search-based question answering, neglect a central aspect of real-world engineering: the iterative optimization of feasible designs. To this end, we introduce Frontier-Eng, a human-verified benchmark for generative optimization -- an iterative propose-execute-evaluate loop in which an agent generates candidate artifacts, receives executable verifier feedback, and revises them under a fixed interaction budget -- spanning 47 tasks across five broad engineering categories. Unlike previous suites, Frontier-Eng tasks are grounded in industrial-grade simulators and verifiers that provide continuous reward signals and enforce hard feasibility constraints. We evaluate eight frontier language models using representative search frameworks, finding that while Claude 4.6 Opus achieves the most robust performance, the benchmark remains challenging for all models. Our analysis suggests a dual power-law decay in improvement frequency (~1/iteration) and magnitude (~1/improvement count). We further show that although width improves parallelism and diversity, depth remains crucial for hard-won improvements under a fixed budget. Frontier-Eng establishes a new standard for assessing the capacity of AI agents to integrate domain knowledge with executable feedback to solve complex, open-ended engineering problems.
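To get intuition for the dual power-law result, one can compose the two reported decays numerically. This is a back-of-the-envelope reading, not the paper's analysis: it assumes the k-th improvement arrives with probability ~1/t at iteration t and contributes a magnitude ~1/k, so both the expected improvement count and the resulting gain are partial harmonic sums.

```python
def expected_gain(budget):
    """Expected cumulative gain when improvements arrive with
    probability ~1/t at iteration t (so their expected count after
    `budget` iterations is a harmonic sum, ~ln(budget)) and the
    k-th improvement contributes magnitude ~1/k."""
    expected_improvements = sum(1.0 / t for t in range(1, budget + 1))
    k_max = max(1, round(expected_improvements))
    return sum(1.0 / k for k in range(1, k_max + 1))

for b in (10, 100, 1000):
    print(b, round(expected_gain(b), 3))
```

Under these assumptions, a tenfold increase in budget adds only a fraction of a harmonic term to the total gain, which is consistent with the paper's framing that late improvements are "hard-won" and that spending the fixed budget on depth, rather than many shallow parallel attempts, is what reaches them.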