Programming with Data: Test-Driven Data Engineering for Self-Improving LLMs from Raw Corpora

arXiv cs.AI · April 29, 2026

Key Points

  • The paper addresses a core AI challenge: fine-tuning LLMs on domain corpora improves performance, but the process offers no feedback for diagnosing which deficiencies in the training data cause failures on domain tasks.
  • It proposes “Programming with Data,” mapping the data-engineering lifecycle to the software development lifecycle by using a structured knowledge representation as the shared basis for both training and evaluation.
  • In this framework, training data acts like source code, model training corresponds to compilation, benchmarking becomes unit testing, and failure-driven data repair becomes debugging that targets specific concept gaps and reasoning-chain breaks (a toy version of this loop is sketched after this list).
  • The authors report that iterative repair cycles yield consistent improvements across different model scales and architectures while preserving general capabilities, and they release open resources including a structured knowledge base, benchmark suite, and training corpus.
  • They demonstrate the approach across sixteen disciplines spanning natural sciences, engineering, biomedicine, and social sciences, aiming to make the link between training data and model behavior reliably traceable and systematically fixable.
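
The lifecycle mapping in these bullets can be made concrete as a toy compile/test/debug loop. The sketch below is a hypothetical illustration under strong simplifying assumptions, not the paper's released code: `train`, `run_benchmark`, `repair_cycle`, and the data classes are invented names, and the "model" is faked as the set of knowledge-base concepts its corpus covers, so that the control flow of a repair cycle is visible end to end.

```python
from dataclasses import dataclass, field


@dataclass
class BenchmarkItem:
    question: str
    required_concept: str  # the knowledge-base concept this item tests


@dataclass
class Corpus:
    documents: dict[str, str] = field(default_factory=dict)  # concept -> training text


def train(corpus: Corpus) -> set[str]:
    # "Compilation": the toy model knows exactly the concepts its corpus covers.
    return set(corpus.documents)


def run_benchmark(model: set[str], benchmark: list[BenchmarkItem]) -> list[BenchmarkItem]:
    # "Unit testing": an item fails when the model lacks its required concept.
    return [item for item in benchmark if item.required_concept not in model]


def repair_cycle(corpus: Corpus, knowledge_base: dict[str, str],
                 benchmark: list[BenchmarkItem], max_cycles: int = 5) -> Corpus:
    # "Debugging": trace each failure back to the shared knowledge base and
    # patch the corpus with training text for just the missing concept.
    for cycle in range(max_cycles):
        model = train(corpus)
        failures = run_benchmark(model, benchmark)
        print(f"cycle {cycle}: {len(failures)} failing item(s)")
        if not failures:
            break
        for item in failures:
            corpus.documents[item.required_concept] = knowledge_base[item.required_concept]
    return corpus


if __name__ == "__main__":
    kb = {"entropy": "Entropy quantifies disorder ...",
          "enthalpy": "Enthalpy is the heat content ..."}
    bench = [BenchmarkItem("Define entropy.", "entropy"),
             BenchmarkItem("Define enthalpy.", "enthalpy")]
    repair_cycle(Corpus(documents={"entropy": kb["entropy"]}), kb, bench)
```

The point of the sketch is the coupling: because benchmark items and corpus documents index into the same knowledge representation, a failing "test" identifies exactly which piece of "source" to patch.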

Abstract

Reliably transferring specialized human knowledge from text into large language models remains a fundamental challenge in artificial intelligence. Fine-tuning on domain corpora has enabled substantial capability gains, but the process operates without feedback: when a model fails on a domain task, there is no method to diagnose what is deficient in the training data, and the only recourse is to add more data indiscriminately. Here we show that when a structured knowledge representation extracted from the source corpus serves as the shared foundation for both training data and evaluation, the complete data-engineering lifecycle maps onto the software development lifecycle in a precise and operative way: training data becomes source code specifying what the model should learn, model training becomes compilation, benchmarking becomes unit testing, and failure-driven data repair becomes debugging. Under this correspondence, model failures decompose into concept-level gaps and reasoning-chain breaks that can be traced back to specific deficiencies in the data and repaired through targeted patches, with each repair cycle producing consistent improvements across model scales and architectures without degrading general capabilities. We formalize this principle as Programming with Data and instantiate it across sixteen disciplines spanning the natural sciences, engineering, biomedicine, and the social sciences, releasing a structured knowledge base, benchmark suite, and training corpus as open resources. By demonstrating that the relationship between training data and model behaviour is structurally traceable and systematically repairable, this work establishes a principled foundation for the reliable engineering of human expertise into language models.
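
The abstract's decomposition of failures into concept-level gaps and reasoning-chain breaks suggests a simple classification rule, sketched below under assumptions the paper does not spell out: each benchmark item is annotated with the ordered chain of concepts it reasons through (`chain`), a failure touching an untrained concept counts as a concept gap patched with definitional text, and a failure whose steps are all individually known counts as a chain break patched with a worked example linking them.

```python
from enum import Enum, auto


class FailureKind(Enum):
    CONCEPT_GAP = auto()  # a prerequisite concept is absent from training
    CHAIN_BREAK = auto()  # every step is known, but their linkage is not


def classify_failure(known_concepts: set[str], chain: list[str]) -> FailureKind:
    # If the reasoning chain requires any concept the model never learned,
    # the failure is a concept gap; otherwise the chain itself is broken.
    if any(step not in known_concepts for step in chain):
        return FailureKind.CONCEPT_GAP
    return FailureKind.CHAIN_BREAK


def targeted_patch(kind: FailureKind, chain: list[str],
                   known_concepts: set[str], kb: dict[str, str]) -> str:
    # Concept gaps get definitional text for the missing concepts;
    # chain breaks get a worked example walking the full chain.
    if kind is FailureKind.CONCEPT_GAP:
        missing = [c for c in chain if c not in known_concepts]
        return "\n".join(kb[c] for c in missing if c in kb)
    return "Worked example: " + " -> ".join(chain)


# Usage: the model knows "force" and "mass" but fails an item whose chain
# ends in "acceleration", so the patch supplies the missing definition.
known = {"force", "mass"}
chain = ["force", "mass", "acceleration"]
kb = {"acceleration": "Acceleration is the rate of change of velocity ..."}
kind = classify_failure(known, chain)  # FailureKind.CONCEPT_GAP
print(targeted_patch(kind, chain, known, kb))
```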