AgentFloor: How Far Up the Tool-Use Ladder Can Small Open-Weight Models Go?

arXiv cs.AI / 5/4/2026


Key Points

  • The paper introduces AgentFloor, a deterministic 30-task benchmark that grades agent capabilities on a six-tier ladder from instruction following to long-horizon planning under persistent constraints (see the sketch after this list).
  • The authors evaluate 16 open-weight models (0.27B–32B parameters) and also include GPT-5, running 16,542 scored trials to test how far “small” models can go in real agent workflows.
  • Results indicate a practical boundary: small and mid-sized open-weight models are already strong enough for the short-horizon, structured tool-use work that dominates many agent pipelines.
  • The strongest open-weight model overall matches GPT-5 on the benchmark while being substantially cheaper and faster, but frontier models still lead most clearly on long-horizon tasks requiring sustained coordination and reliable constraint tracking.
  • The study also finds that the gap is not explained by scale alone: some failures respond to targeted interventions, but the effects are model-specific rather than universal. The authors recommend routing routine actions to smaller open-weight models and reserving frontier models for the narrower set of tasks that need deeper planning and control.
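
The six-tier ladder can be pictured as a small, deterministic task registry. The sketch below is illustrative only: the tier names, tasks, and grading checks are hypothetical placeholders, not the released harness.

```python
from typing import Callable

# (prompt, deterministic pass/fail check) -- every task is graded by an exact check,
# so repeated runs over the same model output score identically.
Task = tuple[str, Callable[[str], bool]]

# Hypothetical ladder layout: tier 1 = instruction following, tier 6 = long-horizon
# planning under persistent constraints. The real benchmark spreads 30 tasks over 6 tiers.
LADDER: dict[int, list[Task]] = {
    1: [("Reply with exactly the word 'ready'.", lambda out: out.strip() == "ready")],
    2: [("Emit a JSON tool call containing the key 'tool'.", lambda out: '"tool"' in out)],
    # ... tiers 3-6 would add multi-step coordination and long-horizon planning tasks
}

def score(outputs_by_tier: dict[int, list[str]]) -> float:
    """Fraction of ladder tasks passed, graded deterministically."""
    total, passed = 0, 0
    for tier, tasks in LADDER.items():
        outs = outputs_by_tier.get(tier, [])
        for i, (_prompt, check) in enumerate(tasks):
            total += 1
            if i < len(outs) and check(outs[i]):
                passed += 1
    return passed / total if total else 0.0
```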

Abstract

Production agentic systems make many model calls per user request, and most of those calls are short, structured, and routine. This raises a practical routing question that existing evaluations do not directly answer: which parts of an agent workflow truly require large frontier intelligence, and which can be handled by smaller models? We introduce AgentFloor, a deterministic 30-task benchmark organized as a six-tier capability ladder, spanning instruction following, tool use, multi-step coordination, and long-horizon planning under persistent constraints. We evaluate 16 open-weight models, from 0.27B to 32B parameters, alongside GPT-5 across 16,542 scored runs. Our results reveal a clear boundary of model necessity. Small and mid-sized open-weight models are already sufficient for much of the short-horizon, structured tool use work that dominates real agent pipelines, and in aggregate, the strongest open-weight model matches GPT-5 on our benchmark while being substantially cheaper and faster to run. The gap appears most clearly on long-horizon planning tasks that require sustained coordination and reliable constraint tracking over many steps, where frontier models still hold an advantage, though neither side reaches strong reliability. We also find that this boundary is not explained by scale alone: some failures respond to targeted interventions, but the effects are model-specific rather than universal. These findings suggest a practical design principle for agentic systems: use smaller open-weight models for the broad base of routine actions, and reserve large frontier models for the narrower class of tasks that truly demand deeper planning and control. We release the benchmark, harness, sweep configurations, and full run corpus.
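
The design principle in the closing sentences maps naturally onto a thin routing layer. The sketch below assumes each agent step carries an estimated capability tier and uses placeholder model identifiers; it is an illustration of the idea, not the authors' implementation.

```python
from dataclasses import dataclass

# Illustrative tier labels loosely following the paper's six-tier ladder;
# the split between "routine" and "frontier" tiers is an assumption.
ROUTINE_TIERS = {"instruction_following", "tool_use", "structured_output"}
FRONTIER_TIERS = {"multi_step_coordination", "long_horizon_planning"}

@dataclass
class AgentStep:
    tier: str     # estimated capability tier required for this model call
    prompt: str   # payload for the call

def route_model(step: AgentStep) -> str:
    """Pick a model for one agent step: a small open-weight model for routine,
    short-horizon work; a frontier model for long-horizon planning and
    constraint tracking. Model identifiers are placeholders."""
    if step.tier in ROUTINE_TIERS:
        return "open-weight-small"   # cheap and fast; sufficient per the findings
    return "frontier-large"          # reserved for deeper planning and control

# Example: a routine tool-use step goes to the small model.
print(route_model(AgentStep(tier="tool_use", prompt="call the weather API")))
```

In a production pipeline, the tier estimate could come from the orchestrator's own task graph or a cheap upstream classifier; the point is only that the routing decision itself is simple once the tier is known.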