ReCUBE: Evaluating Repository-Level Context Utilization in Code Generation

arXiv cs.AI / 3/30/2026


Key Points

  • The paper introduces ReCUBE, a new benchmark that isolates and measures how well LLMs utilize repository-level context by having models reconstruct a masked file using only the rest of the repository plus dependency specs and documentation.
  • It evaluates generated code using usage-aware tests that cover both internal logic and cross-file integration, aiming to better reflect real-world software behavior than existing coding benchmarks.
  • Results across eight models and multiple settings indicate that repository-level context utilization is still difficult even for state-of-the-art systems, with GPT-5 reaching a 37.57% strict pass rate in the full-context setting.
  • To improve agentic repository exploration, the authors propose the Caller-Centric Exploration (CCE) toolkit, based on dependency graphs, which guides agents toward the most relevant caller files and improves strict pass rates by up to 7.56%.
  • The ReCUBE benchmark, code, and evaluation framework are released as open source for the research community.
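
The caller-centric idea above can be sketched in a few lines: given a repository and a masked target module, the files that import it (its callers) are the most informative context to surface first. This is a minimal, hypothetical illustration of that idea, not the paper's actual CCE toolkit; the function names and toy repository are invented.

```python
# Hypothetical sketch of caller-centric exploration: find the files in a
# repository that import a masked target module, i.e. its caller files.
# Names and repo layout are illustrative, not ReCUBE's implementation.
import ast

def imported_modules(source: str) -> set[str]:
    """Collect top-level module names imported by a Python source file."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

def find_callers(repo: dict[str, str], target_module: str) -> list[str]:
    """Return the repo files that import target_module (its callers)."""
    return [path for path, src in repo.items()
            if target_module in imported_modules(src)]

# Toy repository as {path: source}; "parser" plays the masked module.
repo = {
    "cli.py": "import parser\nparser.run()\n",
    "tests/test_parser.py": "from parser import run\n",
    "utils.py": "import os\n",
}
print(find_callers(repo, "parser"))  # → ['cli.py', 'tests/test_parser.py']
```

An agent could hand exactly these caller files to the model as priority context, rather than exploring the repository breadth-first.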

Abstract

Large Language Models (LLMs) have recently emerged as capable coding assistants that operate over large codebases through either agentic exploration or full-context generation. Existing benchmarks capture a broad range of coding capabilities, such as resolving GitHub issues, but none of them directly isolate and measure how effectively LLMs leverage repository-level context during code generation. To address this, we introduce ReCUBE, a benchmark in which LLMs reconstruct a masked file within a real-world repository, using all remaining source files, dependency specifications, and documentation as their only source of context. ReCUBE evaluates reconstructed code with usage-aware test cases that simulate both internal module logic and external cross-file integration, reflecting real-world software usage patterns. We further propose the Caller-Centric Exploration (CCE) toolkit, a set of dependency graph-based tools that can be integrated into agentic frameworks to guide agents toward the most relevant caller files during repository exploration. Experiments across eight models in four settings show that repository-level context utilization remains highly challenging even for state-of-the-art models, with GPT-5 achieving only a 37.57% strict pass rate in the full-context setting. Agents augmented with our CCE toolkit consistently outperform all baselines across all evaluated models, with improvements of up to 7.56% in strict pass rate. We release our benchmark, code, and evaluation framework as open source for the NLP research community.
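
The usage-aware evaluation described in the abstract, checking both a module's internal logic and how other files in the repository exercise it, can be illustrated with a toy example. The reconstructed function, its caller, and both checks below are hypothetical inventions for illustration, not ReCUBE's actual test cases.

```python
# Toy illustration of usage-aware testing: a "reconstructed" module is
# checked both in isolation and through a simulated cross-file caller.
# All names here are illustrative, not from the ReCUBE benchmark.

def slugify(title: str) -> str:
    """Reconstructed module code: turn a title into a URL slug."""
    return "-".join(title.lower().split())

def make_post_url(title: str) -> str:
    """Caller elsewhere in the repo that integrates slugify."""
    return f"/posts/{slugify(title)}"

# Internal-logic check: exercises the module in isolation.
assert slugify("Hello World") == "hello-world"

# Cross-file integration check: exercises the module through its caller,
# catching interface mismatches a purely local test would miss.
assert make_post_url("Hello World") == "/posts/hello-world"
print("all usage-aware checks passed")
```

A reconstruction that passes only the internal check but breaks a caller's expectations would fail the strict criterion, which is the distinction the benchmark's "strict pass rate" is built around.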