To See is Not to Master: Teaching LLMs to Use Private Libraries for Code Generation

arXiv cs.CL · March 30, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper finds that merely injecting retrieved private-library API documentation into LLM context is not enough for reliable private-library API invocation during code generation.
  • It introduces PriCoder, which trains LLMs for private-library API use by automatically synthesizing training data modeled as a graph and refining it via Progressive Graph Evolution and Multidimensional Graph Pruning.
  • The authors evaluate PriCoder on three mainstream LLMs using two newly built benchmarks based on recently released libraries that are unfamiliar to the models.
  • Results show PriCoder delivers substantial improvements in private-library-oriented code generation, with pass@1 gains exceeding 20% in many settings, while having negligible impact on general code generation performance.
  • PriCoder’s code and benchmarks are released publicly to support further research and replication.

Abstract

Large Language Models (LLMs) have shown strong potential for code generation, yet they remain limited in private-library-oriented code generation, where the goal is to generate code using APIs from private libraries. Existing approaches mainly rely on retrieving private-library API documentation and injecting relevant knowledge into the context at inference time. However, our study shows that this is insufficient: even given accurate required knowledge, LLMs still struggle to invoke private-library APIs effectively. To address this limitation, we propose PriCoder, an approach that teaches LLMs to invoke private-library APIs through automatically synthesized data. Specifically, PriCoder models private-library data synthesis as the construction of a graph, and alternates between two graph operators: (1) Progressive Graph Evolution, which improves data diversity by progressively synthesizing more diverse training samples from basic ones, and (2) Multidimensional Graph Pruning, which improves data quality through a rigorous filtering pipeline. To support rigorous evaluation, we construct two new benchmarks based on recently released libraries that are unfamiliar to the tested models. Experiments on three mainstream LLMs show that PriCoder substantially improves private-library-oriented code generation, yielding gains of over 20% in pass@1 in many settings, while causing negligible impact on general code generation capability. Our code and benchmarks are publicly available at https://github.com/eniacode/PriCoder.
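The alternating loop described in the abstract can be pictured as follows. This is a minimal, hypothetical sketch, not the paper's actual implementation: the function names, the graph representation (a flat dict of samples), and the derivation/quality-check callables are all illustrative assumptions standing in for PriCoder's real operators.

```python
# Illustrative sketch of the synthesis loop from the abstract:
# samples are graph nodes; evolution derives new samples from
# existing ones; pruning filters them along quality dimensions.
# All names and scoring logic here are hypothetical.

def progressive_evolution(graph, derive):
    """Add one derived (more diverse) sample per existing node,
    keyed so each child records its parent in its node id."""
    new_nodes = {}
    for node_id, sample in list(graph.items()):
        child_id = f"{node_id}->v{len(graph) + len(new_nodes)}"
        new_nodes[child_id] = derive(sample)
    graph.update(new_nodes)
    return graph

def multidimensional_pruning(graph, checks):
    """Keep only samples that pass every quality check ('dimension'),
    e.g. syntax validity, API correctness, or deduplication."""
    return {nid: s for nid, s in graph.items()
            if all(check(s) for check in checks)}

def synthesize(seeds, derive, checks, rounds=2):
    """Alternate evolution and pruning, starting from basic seeds."""
    graph = {f"seed{i}": s for i, s in enumerate(seeds)}
    for _ in range(rounds):
        graph = progressive_evolution(graph, derive)
        graph = multidimensional_pruning(graph, checks)
    return graph

# Toy usage: grow variants of a private-API call, drop over-long ones.
data = synthesize(
    seeds=["client = privlib.connect()"],        # hypothetical API
    derive=lambda s: s + "  # harder variant",   # stand-in for an LLM step
    checks=[lambda s: len(s) < 200],
    rounds=2,
)
```

In practice the `derive` step would itself be an LLM call that composes richer tasks from simpler ones, and `checks` would include execution-based validation; the point of the sketch is only the alternation of the two graph operators.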