Learning to Commit: Generating Organic Pull Requests via Online Repository Memory
arXiv cs.CL / 3/30/2026
Key Points
- The paper argues that LLM coding agents fail on real pull requests mainly due to a "lack of organicity" — mismatched project conventions and violations of long-established architectural constraints — rather than basic functional incorrectness.
- It introduces “Learning to Commit,” which uses Online Repository Memory to learn project-specific change patterns from earlier commits instead of relying only on the latest repository snapshot.
- The method performs supervised contrastive reflection by attempting to resolve historical issues, comparing predictions to oracle diffs, and distilling reusable patterns capturing coding style, internal API usage, and architectural invariants.
- For new PR descriptions, the agent conditions its PR generation on the accumulated skills so the resulting changes better reflect the repository’s evolution and maintainers’ expectations.
- Experiments on an expert-maintained repository with a rich commit history, evaluated on future merged PRs, show improved organicity scores across correctness, style consistency, internal API reuse, and modified-region plausibility.
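The learning loop described above — replay historical issues, contrast the agent's predicted diff with the merged oracle diff, and distill reusable patterns that condition future PR generation — can be sketched as follows. This is a minimal illustration, not the paper's implementation; all class and function names (`RepositoryMemory`, `distill`, `replay_history`) are hypothetical, and the "diffs" are plain strings standing in for real patches.

```python
from dataclasses import dataclass, field

@dataclass
class RepositoryMemory:
    """Accumulates project-specific patterns distilled from past commits.
    Illustrative stand-in for the paper's Online Repository Memory."""
    skills: list = field(default_factory=list)

    def distill(self, issue, predicted_diff, oracle_diff):
        # Supervised contrastive reflection: where the prediction diverges
        # from the oracle diff, record a reusable pattern (coding style,
        # internal API usage, architectural invariant).
        if predicted_diff != oracle_diff:
            self.skills.append(
                f"For issues like {issue!r}: prefer {oracle_diff!r} "
                f"over {predicted_diff!r}"
            )

    def as_context(self):
        # Accumulated skills are serialized and prepended to the agent's
        # prompt when it generates a PR for a new issue description.
        return "\n".join(self.skills)

def replay_history(memory, history, predict):
    """Replay historical (issue, oracle_diff) pairs and learn from mismatches."""
    for issue, oracle_diff in history:
        predicted = predict(issue)  # agent attempts the old issue
        memory.distill(issue, predicted, oracle_diff)

# Toy usage: two historical issues with their merged (oracle) resolutions,
# and a naive agent whose predictions ignore project conventions.
history = [
    ("add retry to fetch()", "wrap fetch() with util.retry decorator"),
    ("log cache misses", "use project logger, not print()"),
]
memory = RepositoryMemory()
replay_history(memory, history, predict=lambda issue: f"inline fix for {issue}")

# New PR generation would be conditioned on the accumulated skills.
print(len(memory.skills))  # → 2
```

Both replayed predictions mismatch their oracle diffs, so two patterns are distilled; on a real repository the distillation step would summarize the divergence with an LLM rather than store raw diff strings.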