AI Navigate

A Grammar of Machine Learning Workflows

arXiv cs.LG / 3/12/2026


Key Points

  • A new grammar for ML workflows decomposes the supervised learning lifecycle into seven kernel primitives connected by a typed DAG to prevent data leakage at call time.
  • The approach introduces four hard constraints, including a runtime-enforced evaluate/assess boundary that rejects repeated test-set assessment via a guard on a distinct Evidence type.
  • A companion study across 2,047 experiments quantifies leakage impact, showing selection leakage inflates performance by d_z = 0.93 and memorization leakage by d_z = 0.53–1.11.
  • Python, R, and Julia implementations are provided, and an appendix specification enables others to build conforming implementations.

Abstract

Data leakage affected 294 published papers across 17 scientific fields (Kapoor & Narayanan, 2023). The dominant response has been documentation: checklists, linters, best-practice guides. Documentation does not prevent these failures. This paper proposes a structural remedy: a grammar that decomposes the supervised learning lifecycle into seven kernel primitives connected by a typed directed acyclic graph (DAG), with four hard constraints that reject the two most damaging leakage classes at call time. The grammar's core contribution is the terminal assess constraint: a runtime-enforced evaluate/assess boundary where repeated test-set assessment is rejected by a guard on a nominally distinct Evidence type. A companion study across 2,047 experimental instances quantifies why this matters: selection leakage inflates performance by d_z = 0.93 and memorization leakage by d_z = 0.53–1.11. Three separate implementations (Python, R, and Julia) confirm the claims. The appendix specification lets anyone build a conforming version.
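The terminal assess constraint can be illustrated with a minimal sketch. This is not the paper's actual API; all class and function names here (`TestSet`, `Evidence`, `AssessmentAlreadySpentError`) are hypothetical, chosen only to show how a runtime guard on a distinct result type can reject a second test-set assessment while leaving validation-set evaluation unrestricted:

```python
# Hypothetical sketch of a runtime-enforced evaluate/assess boundary.
# Names are illustrative, not the paper's API.

class AssessmentAlreadySpentError(RuntimeError):
    """Raised when the terminal test-set assessment is requested twice."""

class Evidence:
    """Nominally distinct result type: produced at most once per test set."""
    def __init__(self, score: float):
        self.score = score

class TestSet:
    def __init__(self, X, y):
        self._X, self._y = X, y
        self._spent = False  # guard state: has assess() already run?

    def assess(self, model) -> Evidence:
        # Reject repeated assessment at call time, rather than relying
        # on documentation or reviewer discipline.
        if self._spent:
            raise AssessmentAlreadySpentError(
                "test set already assessed; repeating would leak")
        self._spent = True
        preds = [model(x) for x in self._X]
        acc = sum(p == t for p, t in zip(preds, self._y)) / len(self._y)
        return Evidence(acc)

# evaluate() on held-out validation data carries no such guard and
# may be called as often as model selection requires:
def evaluate(model, X_val, y_val) -> float:
    preds = [model(x) for x in X_val]
    return sum(p == t for p, t in zip(preds, y_val)) / len(y_val)
```

In this sketch the guard is mutable state on the test set; a conforming implementation per the paper's appendix would presumably enforce the same property through its typed DAG rather than an ad hoc flag.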