$\texttt{YC-Bench}$: Benchmarking AI Agents for Long-Term Planning and Consistent Execution

arXiv cs.CL / 4/3/2026


Key Points

  • The article introduces YC-Bench, an open-source benchmark that tests LLM agents’ ability to plan and execute consistently over a simulated one-year startup horizon with hundreds of turns and partial observability.
  • Agents must handle compounding decision effects—managing employees, choosing task contracts, and maintaining profitability under adversarial clients and a growing payroll.
  • In evaluations of 12 models (proprietary and open source), only three consistently finish above their $200K starting capital, with Claude Opus 4.6 leading on final funds at about $1.27M.
  • Scratchpad usage is identified as the strongest predictor of success despite context truncation, while failure to detect adversarial clients is the leading cause of collapse (about 47% of bankruptcies).
  • The analysis shows frontier models still struggle with long-horizon coherence, exhibiting distinct failure modes such as over-parallelization, highlighting key capability gaps to address.
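The scratchpad finding above hinges on a specific mechanism: the agent's turn-by-turn context gets truncated, so only explicitly written notes survive the full horizon. The sketch below illustrates that dynamic in miniature; the class, method names, and placeholder action are hypothetical and are not the actual YC-Bench API.

```python
from collections import deque

class ScratchpadAgent:
    """Toy agent loop: history is truncated to a fixed window,
    so only the explicit scratchpad persists across the horizon.
    (Hypothetical sketch, not the YC-Bench implementation.)"""

    def __init__(self, context_window: int = 8):
        self.history = deque(maxlen=context_window)  # truncated context
        self.scratchpad: list[str] = []              # survives truncation

    def act(self, observation: str) -> str:
        self.history.append(observation)
        # A real agent would condition an LLM call on history + scratchpad;
        # here we just record a note and return a placeholder action.
        self.scratchpad.append(f"note: {observation}")
        return "accept_contract"

agent = ScratchpadAgent(context_window=3)
for turn in range(10):
    agent.act(f"turn {turn}: client offer")

print(len(agent.history))     # 3  -> older turns were truncated
print(len(agent.scratchpad))  # 10 -> notes persisted across truncation
```

The point of the sketch: without deliberate writes to the scratchpad, anything older than the context window is simply gone, which is why scratchpad discipline correlates so strongly with long-horizon performance in the paper's evaluation.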

Abstract

As LLM agents tackle increasingly complex tasks, a critical question is whether they can maintain strategic coherence over long horizons: planning under uncertainty, learning from delayed feedback, and adapting when early mistakes compound. We introduce \texttt{YC-Bench}, a benchmark that evaluates these capabilities by tasking an agent with running a simulated startup over a one-year horizon spanning hundreds of turns. The agent must manage employees, select task contracts, and maintain profitability in a partially observable environment where adversarial clients and growing payroll create compounding consequences for poor decisions. We evaluate 12 models, both proprietary and open source, across 3 seeds each. Only three models consistently surpass the starting capital of \$200K, with Claude Opus 4.6 achieving the highest average final funds at \$1.27M, followed by GLM-5 at \$1.21M at 11\times lower inference cost. Scratchpad usage, the sole mechanism for persisting information across context truncation, is the strongest predictor of success, and failure to detect adversarial clients is the primary failure mode, accounting for 47\% of bankruptcies. Our analysis shows that frontier models still exhibit distinct failure modes, such as over-parallelization, revealing capability gaps in long-horizon performance. \texttt{YC-Bench} is open-source, reproducible, and configurable.