Learning to Commit: Generating Organic Pull Requests via Online Repository Memory

arXiv cs.CL / 3/30/2026


Key Points

  • The paper argues that LLM coding agents fail on real pull requests mainly due to a “lack of organicity” rather than functional incorrectness: generated changes mismatch project conventions and violate long-established architectural constraints.
  • It introduces “Learning to Commit,” which uses Online Repository Memory to learn project-specific change patterns from earlier commits instead of relying only on the latest repository snapshot.
  • The method performs supervised contrastive reflection by attempting to resolve historical issues, comparing predictions to oracle diffs, and distilling reusable patterns capturing coding style, internal API usage, and architectural invariants.
  • For new PR descriptions, the agent conditions its PR generation on the accumulated skills so the resulting changes better reflect the repository’s evolution and maintainers’ expectations.
  • Experiments on an expert-maintained repository with a rich commit history, evaluated on genuinely future merged PRs, show improved organicity scores across functional correctness, style consistency, internal API reuse, and modified-region plausibility.
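
The reflection loop sketched in these points can be illustrated as follows. This is a minimal sketch, not the paper's implementation: all names (`SkillMemory`, `attempt_fix`, `distill_skill`, `build_memory`) are hypothetical stand-ins for interfaces the summary does not specify.

```python
# Hypothetical sketch of supervised contrastive reflection over commit history.
from dataclasses import dataclass, field


@dataclass
class SkillMemory:
    """Continuously growing store of repository-specific patterns."""
    skills: list[str] = field(default_factory=list)

    def add(self, skill: str) -> None:
        # Keep only novel, non-empty skills.
        if skill and skill not in self.skills:
            self.skills.append(skill)


def attempt_fix(issue: str, snapshot: dict[str, str]) -> str:
    """Placeholder for the agent's blind attempt at a historical issue."""
    return f"naive patch for: {issue}"


def distill_skill(predicted: str, oracle: str) -> str:
    """Placeholder for contrastive reflection: turn the gap between the
    prediction and the oracle diff into a reusable, repo-specific rule."""
    if predicted != oracle:
        return f"prefer '{oracle}' over '{predicted}'"
    return ""  # prediction matched; nothing to learn from this commit


def build_memory(history: list[tuple[str, str]],
                 snapshot: dict[str, str]) -> SkillMemory:
    """Replay (issue, oracle diff) pairs in strict chronological order,
    accumulating skills that later condition PR generation."""
    memory = SkillMemory()
    for issue, oracle_diff in history:
        predicted = attempt_fix(issue, snapshot)
        memory.add(distill_skill(predicted, oracle_diff))
    return memory
```

In this toy form, a skill is just a string; the paper's skills encode coding style, internal API usage, and architectural invariants.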

Abstract

Large language model (LLM)-based coding agents achieve impressive results on controlled benchmarks yet routinely produce pull requests that real maintainers reject. The root cause is not functional incorrectness but a lack of organicity: generated code ignores project-specific conventions, duplicates functionality already provided by internal APIs, and violates implicit architectural constraints accumulated over years of development. Simply exposing an agent to the latest repository snapshot is not enough: the snapshot reveals the final state of the codebase, but not the repository-specific change patterns by which that state was reached. We introduce Learning to Commit, a framework that closes this gap through Online Repository Memory. Given a repository with a strict chronological split, the agent performs supervised contrastive reflection on earlier commits: it blindly attempts to resolve each historical issue, compares its prediction against the oracle diff, and distils the gap into a continuously growing set of skills: reusable patterns capturing coding style, internal API usage, and architectural invariants. When a new PR description arrives, the agent conditions its generation on these accumulated skills, producing changes grounded in the project's own evolution rather than generic pretraining priors. Evaluation is conducted on genuinely future, merged pull requests that could not have been seen during the skill-building phase, and spans multiple dimensions including functional correctness, code-style consistency, internal API reuse rate, and modified-region plausibility. Experiments on an expert-maintained repository with rich commit history show that Online Repository Memory effectively improves organicity scores on held-out future tasks.
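
The evaluation protocol in the abstract, a strict chronological split plus a multi-dimensional organicity score, can be sketched as below. The uniform-mean aggregation and the dictionary keys are assumptions for illustration; the abstract names the dimensions but not how they are combined.

```python
# Hypothetical sketch of the chronological split and organicity scoring.
from statistics import mean


def chronological_split(commits: list[dict], cutoff: int) -> tuple[list, list]:
    """Commits at or before the cutoff build the repository memory;
    strictly later, merged PRs form the held-out evaluation set."""
    past = [c for c in commits if c["time"] <= cutoff]
    future = [c for c in commits if c["time"] > cutoff]
    return past, future


def organicity_score(dims: dict[str, float]) -> float:
    """Aggregate per-dimension scores in [0, 1]. The four dimensions mirror
    the abstract; the unweighted mean is an assumed aggregation."""
    expected = {"correctness", "style", "api_reuse", "region_plausibility"}
    if set(dims) != expected:
        raise ValueError(f"expected dimensions {expected}")
    return mean(dims.values())
```

The split guarantees that no evaluation PR could have leaked into the skill-building phase, since memory is built only from commits on the "past" side of the cutoff.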