SWE-QA-Pro: A Representative Benchmark and Scalable Training Recipe for Repository-Level Code Understanding

arXiv cs.CL / 3/18/2026

Key Points

  • SWE-QA-Pro introduces a repository-level code understanding benchmark with diverse long-tail repositories and executable environments to curb memorization by LLMs.
  • The benchmark uses issue-driven clustering for topical balance and a difficulty calibration that filters out questions solvable by direct-answer baselines, highlighting agentic codebase exploration.
  • The authors present a scalable synthetic data pipeline and a two-stage training recipe (SFT followed by RLAIF) to enable smaller models to learn tool usage and reasoning.
  • Empirically, a Qwen3-8B model trained with this recipe surpasses GPT-4o by 2.3 points on SWE-QA-Pro and narrows the gap to state-of-the-art proprietary models, validating the approach.
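The difficulty-calibration step mentioned above can be pictured as a simple filter: discard any candidate question that a direct-answer baseline (an LLM answering from the prompt alone, with no repository access) already gets right. The sketch below is purely illustrative; all function names and the toy baseline are assumptions, not details from the paper.

```python
# Hypothetical sketch of the difficulty-calibration filter: keep only
# questions that a direct-answer baseline cannot solve without exploring
# the repository. The baseline here is a toy stand-in, not the paper's model.

def direct_answer_baseline(question: str) -> str:
    """Stand-in for an LLM answering from the prompt alone (no repo access).

    Toy heuristic: the baseline only 'knows' answers embedded verbatim
    in the question text after a 'hint:' marker.
    """
    return question.split("hint:")[-1].strip() if "hint:" in question else ""

def is_correct(predicted: str, gold: str) -> bool:
    return predicted.strip().lower() == gold.strip().lower()

def calibrate(candidates: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Filter out (question, answer) pairs the baseline already solves."""
    return [(q, a) for q, a in candidates
            if not is_correct(direct_answer_baseline(q), a)]

pool = [
    ("Which module defines the retry policy? hint: utils.retry", "utils.retry"),
    ("Which function parses the config file?", "load_config"),
]
kept = calibrate(pool)
# Only the second question survives: the baseline solved the first one
# without touching the codebase, so it is filtered out as too easy.
```

Surviving questions are, by construction, ones where codebase exploration adds value, which is what makes the reported agentic-vs-direct gap meaningful.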

Abstract

Agentic repository-level code understanding is essential for automating complex software engineering tasks, yet the field lacks reliable benchmarks. Existing evaluations often overlook long-tail topics and rely on popular repositories where Large Language Models (LLMs) can cheat via memorized knowledge. To address this, we introduce SWE-QA-Pro, a benchmark constructed from diverse, long-tail repositories with executable environments. We enforce topical balance via issue-driven clustering to cover under-represented task types and apply a rigorous difficulty calibration process: questions solvable by direct-answer baselines are filtered out. This results in a dataset where agentic workflows significantly outperform direct answering (e.g., a ~13-point gap for Claude Sonnet 4.5), confirming the necessity of agentic codebase exploration. Furthermore, to tackle the scarcity of training data for such complex behaviors, we propose a scalable synthetic data pipeline that powers a two-stage training recipe: Supervised Fine-Tuning (SFT) followed by Reinforcement Learning from AI Feedback (RLAIF). This approach allows small open models to learn efficient tool usage and reasoning. Empirically, a Qwen3-8B model trained with our recipe surpasses GPT-4o by 2.3 points on SWE-QA-Pro and substantially narrows the gap to state-of-the-art proprietary models, demonstrating both the validity of our evaluation and the effectiveness of our agentic training workflow.
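The issue-driven clustering for topical balance can be understood as a two-step procedure: group repository issues into topic clusters, then sample evenly from each cluster so long-tail topics are not drowned out by popular ones. The sketch below uses a toy keyword grouping in place of the paper's (unspecified) clustering method; all names are illustrative assumptions.

```python
# Hypothetical sketch of issue-driven topical balancing: cluster issues by
# topic, then draw the same number of question seeds from every cluster.
# The keyword-based clustering is a toy stand-in for the paper's method.

from collections import defaultdict
from itertools import islice

def cluster_by_keyword(issues: list[str], keywords: list[str]) -> dict[str, list[str]]:
    """Assign each issue to the first matching topic keyword, else 'other'."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for issue in issues:
        topic = next((k for k in keywords if k in issue.lower()), "other")
        clusters[topic].append(issue)
    return clusters

def balanced_sample(clusters: dict[str, list[str]], per_topic: int) -> list[str]:
    """Take up to `per_topic` issues from every cluster, regardless of size."""
    return [i for issues in clusters.values() for i in islice(issues, per_topic)]

issues = [
    "Fix crash in parser",
    "Parser mishandles unicode",
    "Parser drops comments",
    "Docs typo in README",
    "CI pipeline flaky",
]
clusters = cluster_by_keyword(issues, ["parser", "docs", "ci"])
sample = balanced_sample(clusters, per_topic=1)
# The over-represented 'parser' cluster contributes only one seed, the same
# as the long-tail 'docs' and 'ci' clusters.
```

Even sampling per cluster is what keeps under-represented task types visible in the final benchmark, rather than letting frequency in the wild dictate coverage.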