Agentic Frameworks for Reasoning Tasks: An Empirical Study

arXiv cs.AI / 4/21/2026


Key Points

  • The study empirically compares 22 popular agentic frameworks on three reasoning benchmarks (BBH, GSM8K, and ARC) using a unified evaluation setup, assessing accuracy, execution time, computational cost, and cross-benchmark consistency.
  • Nineteen of the 22 frameworks successfully completed all three benchmarks, and 12 of them achieved stable performance with mean accuracy around 74.6–75.9%, running time of 4–6 seconds per task, and about 0.14–0.18 cents per task.
  • The main drivers of weaker performance were orchestration issues rather than inherent reasoning limitations, including uncontrolled context/memory growth (e.g., Camel), costly retry loops from extraction failures (e.g., Upsonic), and API quota exhaustion from iterative interactions that increased prompt length (e.g., AutoGen, Mastra).
  • Mathematical reasoning performance was notably lower: GSM8K mean accuracy was 44.35%, versus 89.80% on BBH and 89.56% on ARC, indicating benchmark-dependent difficulty.
  • The authors conclude that selecting an agentic framework for reasoning-heavy software engineering should prioritize orchestration quality—especially memory control, failure handling, and cost management.
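The runaway-cost failure mode described above (e.g., Upsonic's USD 1,434 day of retries) comes down to retry loops with no spending limit. A minimal sketch of a budget-capped retry wrapper; `call_agent` and its per-call cost are hypothetical stand-ins, not any framework's real API:

```python
# Guard against runaway retry costs: stop retrying once a per-run
# budget is exhausted. `call_agent(task) -> (result_or_None, cost_usd)`
# is an illustrative adapter around a flaky agent call.

class BudgetExceeded(Exception):
    pass

def run_with_budget(call_agent, task, max_retries=3, budget_usd=0.50):
    """Retry a flaky agent call, but never spend more than budget_usd."""
    spent = 0.0
    for attempt in range(1, max_retries + 1):
        result, cost = call_agent(task)   # cost in USD for this call
        spent += cost
        if result is not None:            # extraction succeeded
            return result, spent
        if spent >= budget_usd:
            raise BudgetExceeded(
                f"spent ${spent:.2f} after {attempt} attempts")
    return None, spent                    # retries exhausted, no answer
```

The cap turns an unbounded cost (retry until quota exhaustion) into a bounded one, which is exactly the kind of orchestration-level failure handling the study says should drive framework selection.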

Abstract

Recent advances in agentic frameworks have enabled AI agents to perform complex reasoning and decision-making. However, evidence comparing their reasoning performance, efficiency, and practical suitability remains limited. To address this gap, we empirically evaluate 22 widely used agentic frameworks across three reasoning benchmarks: BBH, GSM8K, and ARC. The frameworks were selected from 1,200 GitHub repositories collected between January 2023 and July 2025 and organized into a taxonomy based on architectural design. We evaluated them under a unified setting, measuring reasoning accuracy, execution time, computational cost, and cross-benchmark consistency. Our results show that 19 of the 22 frameworks completed all three benchmarks. Among these, 12 showed stable performance, with mean accuracy of 74.6–75.9%, execution time of 4–6 seconds per task, and cost of 0.14–0.18 cents per task. Poorer results were mainly caused by orchestration problems rather than reasoning limits. For example, Camel failed to complete BBH after 11 days because of uncontrolled context growth, while Upsonic consumed USD 1,434 in one day because repeated extraction failures triggered costly retries. AutoGen and Mastra also exhausted API quotas through iterative interactions that increased prompt length without improving results. We also found a sharp drop in mathematical reasoning. Mean accuracy on GSM8K was 44.35%, compared with 89.80% on BBH and 89.56% on ARC. Overall, this study provides the first large-scale empirical comparison of agentic frameworks for reasoning-intensive software engineering tasks and shows that framework selection should prioritize orchestration quality, especially memory control, failure handling, and cost management.
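The unified setting measures three per-task quantities: accuracy, wall-clock time, and cost. A minimal sketch of such a harness; the `framework_run` adapter, the token-based pricing, and the task format are assumptions for illustration, not the paper's actual evaluation code:

```python
import time
from statistics import mean

def evaluate(framework_run, tasks, price_per_1k_tokens=0.002):
    """Run one framework over a benchmark and aggregate the three
    metrics used in the study: accuracy (%), seconds per task, and
    cents per task.

    `framework_run(question) -> (answer, tokens_used)` is a hypothetical
    adapter around the framework; `tasks` is a list of
    (question, gold_answer) pairs; the token price is illustrative.
    """
    correct, secs, cents = [], [], []
    for question, gold in tasks:
        t0 = time.perf_counter()
        answer, tokens = framework_run(question)
        secs.append(time.perf_counter() - t0)
        cents.append(tokens / 1000 * price_per_1k_tokens * 100)
        correct.append(answer == gold)
    return {
        "accuracy": mean(correct) * 100,
        "sec_per_task": mean(secs),
        "cents_per_task": mean(cents),
    }
```

Keeping the adapter interface identical across frameworks is what makes the numbers comparable: each framework is swapped in behind the same `framework_run` signature while the measurement loop stays fixed.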