Problems with Chinchilla Approach 2: Systematic Biases in IsoFLOP Parabola Fits

arXiv cs.LG / 3/25/2026


Key Points

  • The paper finds that the widely used Chinchilla Approach 2 parabolic approximation can produce systematic biases in compute-optimal allocation estimates even on noise-free synthetic data.
  • When applied to published Llama 3 IsoFLOP results at open-frontier compute scales, the bias translates into a meaningful compute mismatch (about 6.5% of a $3.8\times10^{25}$ FLOP budget) and about \$1.4M in unnecessary compute at 50% H100 MFU.
  • The authors identify three main bias sources: IsoFLOP sampling grid width (Taylor approximation limits), uncentered IsoFLOP sampling, and loss-surface asymmetry (α ≠ β). Misallocation worsens in simulated multimodal settings, where the loss surface is more asymmetric.
  • While Chinchilla Approach 3 largely removes these biases, it has been criticized as data-inefficient, numerically unstable, and difficult to implement; the paper argues these concerns are unfounded or mitigable.
  • By exploiting the objective's partially linear structure via Variable Projection, the authors propose an optimization procedure that yields unbiased estimates of all five loss-surface parameters and is well-conditioned, analytically differentiable, and amenable to dense or even exhaustive grid search, making it a potential replacement for Approach 2 and a more scalable basis for richer adaptations of Approach 3.
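The parabola bias described in the first bullet is easy to reproduce. The sketch below assumes an illustrative Chinchilla-style surface $L(N, D) = E + A N^{-\alpha} + B D^{-\beta}$ with roughly Chinchilla-like parameter values and the common heuristic $C \approx 6ND$; these are assumptions for demonstration, not the paper's fitted surface or its exact setup. Even on noise-free data, the vertex of a quadratic fit in log N drifts away from the true IsoFLOP minimum when the sampling grid is wide and uncentered:

```python
import numpy as np

# Illustrative loss surface L(N, D) = E + A/N^alpha + B/D^beta.
# Parameter values are assumptions for this sketch, not the paper's fit.
E, A, B, alpha, beta = 1.7, 400.0, 410.0, 0.34, 0.28
C = 1e21  # FLOP budget; along the IsoFLOP curve we take D = C / (6 N)

def isoflop_loss(N):
    """Loss along the IsoFLOP slice defined by C = 6 * N * D."""
    D = C / (6.0 * N)
    return E + A * N ** (-alpha) + B * D ** (-beta)

# "Ground truth": minimize along the slice with a very fine log-spaced search.
N_fine = np.logspace(7.5, 10.5, 20001)
N_true = N_fine[np.argmin(isoflop_loss(N_fine))]

# Approach 2: fit a parabola in log10(N) on a wide grid that is not centered
# on the optimum, then read off the vertex as the compute-optimal N.
N_grid = np.logspace(8.0, 10.0, 9)
logN = np.log10(N_grid)
a2, a1, a0 = np.polyfit(logN, isoflop_loss(N_grid), 2)
N_parabola = 10 ** (-a1 / (2 * a2))  # vertex of the fitted parabola

# Noise-free data, yet the vertex systematically misses the true optimum,
# because the IsoFLOP slice is a sum of exponentials in log N, not a parabola.
bias_pct = 100 * (N_parabola / N_true - 1)
```

Shrinking or recentering `N_grid` changes the size of `bias_pct`, which is the grid-width and grid-centering effect the paper analyzes; the asymmetry of the two exponential terms (α ≠ β) is what keeps the bias from cancelling.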

Abstract

Chinchilla Approach 2 is among the most widely used methods for fitting neural scaling laws. Its parabolic approximation introduces systematic biases in compute-optimal allocation estimates, even on noise-free synthetic data. Applied to published Llama 3 IsoFLOP data at open-frontier compute scales, these biases imply a parameter underallocation corresponding to 6.5% of the $3.8\times10^{25}$ FLOP training budget and \$1.4M (90% CI: \$412K–\$2.9M) in unnecessary compute at 50% H100 MFU. Simulated multimodal model misallocations show even greater opportunity costs due to higher loss-surface asymmetry. Three sources of this error are examined: IsoFLOP sampling grid width (Taylor approximation accuracy), uncentered IsoFLOP sampling, and loss-surface asymmetry ($\alpha \neq \beta$). Chinchilla Approach 3 largely eliminates these biases but is often regarded as less data-efficient, numerically unstable, prone to local minima, and harder to implement. Each concern is shown to be unfounded or addressable, especially when the partially linear structure of the objective is exploited via Variable Projection, enabling unbiased inference on all five loss-surface parameters through a two-dimensional optimization that is well-conditioned, analytically differentiable, and amenable to dense, or even exhaustive, grid search. It may serve as a more convenient replacement for Approach 2 or a more scalable alternative for adaptations of Approach 3 to richer scaling-law formulations.
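The Variable Projection idea in the abstract's last sentences can be sketched directly. For fixed $(\alpha, \beta)$, the Approach 3 form $L(N, D) = E + A N^{-\alpha} + B D^{-\beta}$ is linear in $(E, A, B)$, so the inner fit collapses to closed-form least squares and only a two-dimensional search over the exponents remains. The sketch below uses synthetic noise-free data with assumed parameter values and plain least squares; the paper's actual procedure (objective, data, and search details) may differ, and robust Huber-style objectives are common in Approach 3 fits.

```python
import numpy as np

# Synthetic noise-free data from the five-parameter surface
# L(N, D) = E + A/N^alpha + B/D^beta. All values are illustrative assumptions.
true = dict(E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28)

rng = np.random.default_rng(0)
N = rng.uniform(1e7, 1e10, size=200)   # model parameters
D = rng.uniform(1e9, 1e12, size=200)   # training tokens
L = (true["E"] + true["A"] * N ** (-true["alpha"])
     + true["B"] * D ** (-true["beta"]))

def varpro_residual(alpha, beta):
    """Variable Projection: for fixed (alpha, beta) the model is linear in
    (E, A, B), so the inner fit is closed-form ordinary least squares."""
    X = np.column_stack([np.ones_like(N), N ** (-alpha), D ** (-beta)])
    coef, *_ = np.linalg.lstsq(X, L, rcond=None)
    return np.linalg.norm(X @ coef - L), coef

# Only the two nonlinear parameters remain, so a dense 2-D grid suffices.
alphas = np.linspace(0.1, 0.6, 51)
betas = np.linspace(0.1, 0.6, 51)
_, a_hat, b_hat = min((varpro_residual(a, b)[0], a, b)
                      for a in alphas for b in betas)
_, (E_hat, A_hat, B_hat) = varpro_residual(a_hat, b_hat)
```

Because the outer problem is only two-dimensional, the grid over $(\alpha, \beta)$ can be made as dense as desired, or exhaustive over any plausible exponent range, which is what sidesteps the local-minima and conditioning concerns the abstract mentions.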