Beyond Benchmarks: MathArena as an Evaluation Platform for Mathematics with LLMs

arXiv cs.CL / 5/4/2026

Key Points

  • The paper argues that static math benchmarks are often too narrow, get saturated quickly, and are rarely updated, making it difficult to compare LLMs reliably over time.
  • It presents MathArena as a continuously maintained evaluation platform that extends the original benchmark beyond final-answer olympiad questions.
  • MathArena now spans a broader set of tasks, including proof-focused competitions, research-level arXiv questions, and formal proof generation in Lean.
  • The work emphasizes maintaining a consistent evaluation protocol across models while regularly adding new benchmarks as capabilities improve.
  • Reported results show frontier performance from GPT-5.5, reaching 98% on the 2026 USA Math Olympiad and 74% on research-level questions, underscoring the value of ongoing evaluation.

Abstract

Large language models (LLMs) are becoming increasingly capable mathematical collaborators, but static benchmarks are no longer sufficient for evaluating progress: they are often narrow in scope, quickly saturated, and rarely updated. This makes it hard to compare models reliably and track progress over time. Instead, we need evaluation platforms: continuously maintained systems that run, aggregate, and analyze evaluations across many benchmarks to give a comprehensive picture of model performance within a broad domain. In this work, we build on the original MathArena benchmark by substantially broadening its scope from final-answer olympiad problems to a continuously maintained evaluation platform for mathematical reasoning with LLMs. MathArena now covers a much wider range of tasks, including proof-based competitions, research-level arXiv problems, and formal proof generation in Lean. Additionally, we maintain a clear evaluation protocol for all models and regularly design new benchmarks as model capabilities improve to ensure that MathArena remains challenging. Notably, the strongest model, GPT-5.5, now reaches 98% on the 2026 USA Math Olympiad and 74% on research-level questions, showing that frontier models can now comfortably solve extremely challenging mathematical problems. This highlights the importance of continuously maintained evaluation platforms like MathArena to track the rapid progress of LLMs in mathematical reasoning.