Rethinking Math Reasoning Evaluation: A Robust LLM-as-a-Judge Framework Beyond Symbolic Rigidity

arXiv cs.AI / 4/27/2026


Key Points

  • Mathematical reasoning benchmarks often rely on comparing a model’s final answer to ground truth, but symbolic (rule-based) verification can break down when solutions use different representations or formats.
  • The paper proposes an LLM-as-a-judge evaluation framework that more flexibly assesses correctness across varied mathematical expressions and answer styles.
  • The authors analyze failure cases of symbolic evaluation in two popular benchmarking frameworks (Lighteval and SimpleRL) and show that their approach improves reliability.
  • The improved evaluation method is presented as a way to obtain more trustworthy performance monitoring for models aimed at mathematical problem-solving and reasoning.
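To make the failure mode concrete, here is a minimal illustration (not from the paper) of how rigid rule-based answer matching can break on equivalent answers written in different representations. The function names `symbolic_match` and `numeric_match` are hypothetical stand-ins for the kind of checks used in rule-based graders:

```python
from fractions import Fraction

def symbolic_match(pred: str, gold: str) -> bool:
    # Naive rule-based check: exact string match after stripping whitespace.
    return pred.strip() == gold.strip()

def numeric_match(pred: str, gold: str):
    # Compare as exact rationals; returns None when an answer is not a
    # plain numeric literal (e.g. LaTeX), i.e. the rule cannot decide.
    try:
        return Fraction(pred) == Fraction(gold)
    except ValueError:
        return None

# Equivalent answers in different representations:
print(symbolic_match("1/2", "0.5"))          # False: format mismatch
print(numeric_match("1/2", "0.5"))           # True: same value
print(numeric_match("\\frac{1}{2}", "0.5"))  # None: LaTeX not handled
```

Each widened rule (string match, numeric parsing, LaTeX parsing, unit handling) patches one representation gap but leaves others open, which is the brittleness the paper's LLM-based judge is meant to sidestep.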

Abstract

Recent advances in large language models have led to significant gains across many tasks, including mathematical reasoning, which is commonly used to assess models' logical reasoning and problem-solving abilities. Models are evaluated on mathematical reasoning benchmarks by verifying the final answer against a ground-truth answer. A common approach to this verification relies on symbolic mathematics comparison, which fails to generalize across diverse mathematical representations and solution formats. In this work, we offer a robust and flexible alternative to rule-based symbolic comparison: an LLM-based framework for evaluating model-generated answers that remains accurate across diverse mathematical representations and answer formats. We present failure cases of symbolic evaluation in two popular frameworks, Lighteval and SimpleRL, and compare them with our approach, demonstrating clear improvements over commonly used methods. Our framework enables more reliable evaluation and benchmarking, yielding more accurate performance monitoring, which is important for advancing mathematical problem-solving and intelligent systems.
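The LLM-as-a-judge idea can be sketched as follows. This is an illustrative outline under my own assumptions, not the paper's actual implementation: the prompt wording, the function names (`build_judge_prompt`, `judge_equivalence`), and the YES/NO parsing convention are all hypothetical, and the `llm` argument stands in for any callable that sends a prompt to a model and returns its reply:

```python
def build_judge_prompt(question: str, gold_answer: str, model_answer: str) -> str:
    """Assemble an equivalence-judging prompt (wording is illustrative)."""
    return (
        "You are grading a math answer.\n"
        f"Question: {question}\n"
        f"Reference answer: {gold_answer}\n"
        f"Model answer: {model_answer}\n"
        "Are the two answers mathematically equivalent? Reply YES or NO."
    )

def judge_equivalence(llm, question: str, gold_answer: str, model_answer: str) -> bool:
    """Ask an LLM whether two answers are equivalent.

    `llm` is any callable mapping a prompt string to a reply string.
    """
    reply = llm(build_judge_prompt(question, gold_answer, model_answer))
    return reply.strip().upper().startswith("YES")

# Stand-in LLM so the sketch runs without an API call; a real judge
# would query a model endpoint here.
def stub_llm(prompt: str) -> str:
    return "YES" if "1/2" in prompt and "0.5" in prompt else "NO"

print(judge_equivalence(stub_llm, "What is 2/4 in lowest terms?", "1/2", "0.5"))
```

Because the judge sees the question alongside both answers, it can accept equivalent forms ("1/2", "0.5", "\frac{1}{2}") that a fixed symbolic rule would reject, at the cost of depending on the judge model's own reliability.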