Automatically Generating Hard Math Problems from Hypothesis-Driven Error Analysis

arXiv cs.AI / 4/7/2026


Key Points

  • The paper proposes an AI-driven pipeline that uses hypothesis-driven error analysis to pinpoint the specific math concepts and skills where LLMs make mistakes, enabling targeted benchmark creation rather than generic category-based sets.
  • It links generation quality to "hypothesis accuracy": benchmarks derived from the most accurate hypotheses yield significantly harder problems, lowering Llama-3.3-70B-Instruct's accuracy to about 45%, versus 77% on the original MATH benchmark.
  • The approach is presented as more scalable and adaptable than prior automatic benchmark generation methods, aiming to keep pace with rapid LLM progress and to reduce overfitting to static benchmarks.
  • The authors argue the pipeline can extend beyond math to probe LLM capabilities in other domains, supporting broader investigation of model weaknesses through domain-specific targeting.

Abstract

Numerous math benchmarks exist to evaluate LLMs' mathematical capabilities. However, most involve extensive manual effort and are difficult to scale. Consequently, they cannot keep pace with LLM development or easily provide new instances to mitigate overfitting. Some researchers have proposed automatic benchmark generation methods, but few focus on identifying the specific math concepts and skills on which LLMs are error-prone, and most can only generate category-specific benchmarks. To address these limitations, we propose a new math benchmark generation pipeline that uses AI-generated hypotheses to identify the specific math concepts and skills that LLMs struggle with, and then generates new benchmark problems targeting these weaknesses. Experiments show that hypothesis accuracy positively correlates with the difficulty of the generated problems: problems generated from the most accurate hypotheses reduce Llama-3.3-70B-Instruct's accuracy to as low as 45%, compared to 77% on the original MATH benchmark. Furthermore, our pipeline is highly adaptable and can be applied beyond math to explore a wide range of LLM capabilities, making it a valuable tool for investigating how LLMs perform across different domains.
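As a rough illustration of the pipeline the abstract describes, the control flow might look like the sketch below. All helper names (`propose_hypothesis`, `hypothesis_accuracy`, `generate_problems`) are hypothetical stand-ins for the paper's LLM-backed steps, stubbed deterministically here so the loop runs end to end; this is not the authors' implementation.

```python
# Minimal sketch of hypothesis-driven benchmark generation:
# 1) hypothesize which concept causes a model's errors,
# 2) score the hypothesis against held-out error cases,
# 3) generate new problems only from sufficiently accurate hypotheses.
# All helpers are deterministic stubs standing in for LLM calls.

def propose_hypothesis(error_cases):
    """Stand-in for an LLM that hypothesizes the error-prone concept."""
    # A real system would analyze the error traces; here we just echo
    # the concept tagged on the first error case.
    return {"concept": error_cases[0]["concept"], "evidence": error_cases}

def hypothesis_accuracy(hypothesis, held_out_cases):
    """Fraction of held-out error cases the hypothesis explains (stubbed)."""
    explained = [c for c in held_out_cases
                 if c["concept"] == hypothesis["concept"]]
    return len(explained) / len(held_out_cases)

def generate_problems(hypothesis, n=3):
    """Stand-in for an LLM generating problems targeting the weak concept."""
    return [f"problem {i} targeting {hypothesis['concept']}" for i in range(n)]

def build_benchmark(error_cases, held_out_cases, accuracy_threshold=0.5):
    """Keep only problems derived from sufficiently accurate hypotheses,
    mirroring the paper's reported link between hypothesis accuracy and
    problem difficulty."""
    hyp = propose_hypothesis(error_cases)
    acc = hypothesis_accuracy(hyp, held_out_cases)
    return generate_problems(hyp) if acc >= accuracy_threshold else []
```

In this sketch, filtering on `accuracy_threshold` reflects the paper's finding that hypothesis accuracy correlates with generated-problem difficulty; an actual system would replace the stubs with model calls and verified grading.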