Beyond the Training Distribution: Mapping Generalization Boundaries in Neural Program Synthesis

arXiv cs.LG / 5/1/2026


Key Points

  • The paper proposes a tightly controlled neural program synthesis evaluation setup to measure genuine generalization, avoiding misleading effects from data contamination and opaque training corpora.
  • By enumerating and testing millions of unique programs under a domain-specific arithmetic grammar, the authors build interpretable syntactic and semantic metric spaces to analyze distribution shifts (a toy version of this construction is sketched after this list).
  • The results show that “density generalization” improves out-of-distribution performance when training samples are diverse across both semantic and syntactic spaces.
  • In contrast, “support generalization” is weak: transformer performance drops by more than 30% when the model must generate syntactically novel programs, indicating difficulty with extrapolation.
  • Scaling compute yields only log-linear improvements, leading the authors to argue that robust generalization likely depends on maximizing training diversity across multiple manifolds and adopting new search-based methods to overcome scaling bottlenecks.

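To make the setup concrete, the following is a minimal sketch in the spirit of the paper's methodology, not its actual grammar or metrics. A tiny arithmetic grammar is enumerated exhaustively, and each program is placed in a syntactic space (here, token-level edit distance, one simple choice) and a semantic space (its output vector on a fixed probe set of inputs). The grammar, probe inputs, and distance functions below are illustrative placeholders.

```python
# Minimal, self-contained sketch (not the paper's actual DSL or metrics).
# Enumerate arithmetic expressions from a tiny grammar, then embed each
# program in two spaces: a syntactic one (token-level edit distance) and
# a semantic one (output vector on a fixed probe set of inputs).
from itertools import product

OPS = ["+", "-", "*"]          # hypothetical operator set
LEAVES = ["x", "1", "2"]       # hypothetical terminals

def enumerate_programs(depth):
    """Yield expression strings of the grammar E -> leaf | (E op E)."""
    if depth == 0:
        yield from LEAVES
        return
    smaller = list(enumerate_programs(depth - 1))
    yield from smaller
    for left, op, right in product(smaller, OPS, smaller):
        yield f"({left} {op} {right})"

def semantic_signature(prog, probes=(-2, -1, 0, 1, 2, 3)):
    """Evaluate the program on fixed inputs; the output vector is its semantics."""
    return tuple(eval(prog, {"__builtins__": {}}, {"x": v}) for v in probes)

def syntactic_distance(p1, p2):
    """Levenshtein distance over tokens -- one simple choice of syntactic metric."""
    a, b = p1.split(), p2.split()
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        cur = [i]
        for j, tb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ta != tb)))
        prev = cur
    return prev[-1]

def semantic_distance(p1, p2):
    """Euclidean distance between the two programs' output vectors."""
    s1, s2 = semantic_signature(p1), semantic_signature(p2)
    return sum((u - v) ** 2 for u, v in zip(s1, s2)) ** 0.5

programs = list(dict.fromkeys(enumerate_programs(depth=2)))  # dedupe, keep order
print(len(programs), "unique programs")
print(syntactic_distance("(x + 1)", "(x * 2)"))   # small syntactic gap
print(semantic_distance("(x + 1)", "(x * 2)"))    # semantic gap on the probe set
```

With both metrics in hand, a training set can be characterized by where its programs fall in each space, which is what lets the authors design splits that isolate a single kind of distribution shift.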
Abstract

Large-scale transformers achieve impressive results on program synthesis benchmarks, yet their true generalization capabilities remain obscured by data contamination and opaque training corpora. To rigorously assess whether models are truly generalizing or merely retrieving memorized templates, we introduce a strictly controlled program synthesis environment based on a domain-specific arithmetic grammar. By systematically enumerating and evaluating millions of unique programs, we construct interpretable syntactic and semantic metric spaces. This allows us to precisely map data distributions and sample train and test splits that isolate specific distributional shifts. Our experiments demonstrate that optimizing density generalization -- through diverse sampling over both semantic and syntactic spaces -- induces robust out-of-distribution generalization. Conversely, evaluating support generalization reveals that transformers severely struggle with extrapolation, experiencing a performance drop of over 30% when forced to generate syntactically novel programs. While steadily scaling up compute improves generalization, the gains follow a strictly log-linear relationship. We conclude that robust generalization requires maximizing training diversity across multiple manifolds, and our findings indicate the necessity for novel search-based approaches to break through current log-linear scaling bottlenecks.
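The distinction between the two generalization regimes can be made concrete with a small splitting sketch. The code below is an assumption-laden illustration, not the paper's protocol: it uses nesting depth as a stand-in syntactic coordinate, holds out an entire depth band to create a support shift, and reweights sampling over a shared support to create a density shift.

```python
# Illustrative sketch of the two kinds of held-out splits (support shift vs.
# density shift). The syntactic coordinate (expression depth) and the skewing
# scheme are assumptions, not the paper's exact construction.
import random

def depth(prog):
    """Nesting depth as a crude syntactic coordinate."""
    d = best = 0
    for ch in prog:
        d += ch == "("
        best = max(best, d)
        d -= ch == ")"
    return best

def support_shift_split(programs, held_out_depth):
    """Test programs occupy a syntactic region absent from training:
    the model must extrapolate beyond the training support."""
    train = [p for p in programs if depth(p) < held_out_depth]
    test = [p for p in programs if depth(p) == held_out_depth]
    return train, test

def density_shift_split(programs, n_train, seed=0):
    """Train and test share the same support, but training mass is skewed
    toward shallow programs while the test set is sampled uniformly."""
    rng = random.Random(seed)
    weights = [1.0 / (1 + depth(p)) for p in programs]        # skew toward shallow
    train = rng.choices(programs, weights=weights, k=n_train)
    test = rng.sample(programs, k=min(1000, len(programs)))   # uniform over support
    return train, test

# Example usage with a toy pool (in practice: the full enumerated set).
programs = ["x", "(x + 1)", "((x + 1) * 2)", "(((x * 2) - 1) + x)"]
print(support_shift_split(programs, held_out_depth=3))
print(density_shift_split(programs, n_train=8))
```

Under this framing, the paper's finding is that diversifying the training sample within a fixed support (the density case) transfers well, while asking the model to leave the support entirely (the support case) is where the reported 30%+ performance drop appears.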