[R] ZeroProofML: 'Train on Smooth, Infer on Strict' for undefined targets in scientific ML

Reddit r/MachineLearning / 3/15/2026

Key Points

  • ZeroProofML proposes a semantic approach to undefined targets in scientific ML by using Signed Common Meadows where division by zero yields an absorptive element ⊥ that propagates through computations.
  • It implements a training paradigm of 'Train on Smooth, Infer on Strict' in which training uses smooth N/D pairs to keep gradients flowing, while inference switches to strict decoding to emit an explicit state when crossing the singular boundary.
  • Rational neural nets provide a natural inductive bias for pole-like growth and sharp transitions, and ZeroProofML adds a semantic layer so outputs do not have to rely on clipped finite surrogates near singularities.
  • In three domains—dose–response (pharma), RF filter extrapolation (electronics), and inverse kinematics (robotics)—the framework shows substantial gains (e.g., dramatically reduced false finite predictions in dose–response, higher peak-retention yield and lower phase error in RF extrapolation, and greatly reduced seed-to-seed variance in robotics), while acknowledging open optimization and bias-variance trade-offs in certain scenarios.

We're sharing ZeroProofML, a small framework for scientific ML problems where the target can be genuinely undefined or non-identifiable: poles, assay censoring boundaries, kinematic locks, etc. The underlying issue is division by zero, not as a numerical bug but as a semantic event that shows up whenever a learned rational function hits a pole, a normalization denominator vanishes, or a physical quantity becomes non-identifiable.

The motivating issue is semantic, not just numerical. A common fix for denominator pathologies is ε-regularization: replacing N/D with N/(D+ε). That often keeps training stable, but it also changes the meaning. A point that should decode to "undefined" becomes a large finite scalar instead. Our approach builds on Common Meadows, an algebraic framework from theoretical computer science (Bergstra & Tucker) where division is total: dividing by zero returns an absorptive element ⊥ that propagates through all subsequent operations. The specific variant we use is Signed Common Meadows (SCM), which additionally preserves sign/direction information at the singular boundary.
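To make the ⊥-propagation idea concrete, here is a minimal toy sketch of SCM-style total division in Python. This is our own illustration, not the ZeroProofML API: `BOTTOM`, `scm_div`, and `scm_add` are hypothetical names, and a real implementation would cover the full operator set and carry sign information.

```python
# Toy sketch of Common-Meadow-style arithmetic (illustration only,
# not the ZeroProofML API): division by zero yields an absorptive
# bottom element that propagates through all later operations.
BOTTOM = object()  # stands in for the absorptive element ⊥

def scm_div(n, d):
    """Total division: n/0 maps to BOTTOM instead of raising."""
    if n is BOTTOM or d is BOTTOM:
        return BOTTOM
    if d == 0:
        return BOTTOM
    return n / d

def scm_add(a, b):
    """BOTTOM absorbs: any operation touching it returns BOTTOM."""
    if a is BOTTOM or b is BOTTOM:
        return BOTTOM
    return a + b

assert scm_div(6.0, 2.0) == 3.0
assert scm_div(1.0, 0.0) is BOTTOM
assert scm_add(scm_div(1.0, 0.0), 3.0) is BOTTOM  # ⊥ propagates
```

The key property is that once ⊥ appears, no downstream arithmetic can turn it back into a finite number, which is exactly what ε-regularization fails to guarantee.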

The practical difficulty is that ⊥ annihilates gradients, so you can't train directly in strict mode. Our solution is 'Train on Smooth, Infer on Strict': during training, the model works with smooth projective tuples ⟨N, D⟩ so gradients still flow; at inference, we switch to strict decoding where the denominator crossing the singular boundary emits an explicit state rather than an ε-stabilized large number. Rational neural nets already help with representation: they can model pole-like growth and sharp transitions more naturally than plain MLPs. ZeroProofML builds on that rational inductive bias, but adds a stricter semantic layer: near the singular boundary, the model does not have to return a clipped finite surrogate.
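A rough sketch of the two decoding modes, under our own assumptions: the function names, the training-time ε, and the strict-mode tolerance `tau` are illustrative choices, not ZeroProofML's actual interface.

```python
import math

# 'Train on Smooth, Infer on Strict' (illustrative sketch).
# The model emits a projective tuple (N, D); training consumes a
# smooth surrogate so gradients flow, inference decodes strictly.

def smooth_output(n, d, eps=1e-6):
    # Training-time surrogate: keeps the quotient finite near D = 0
    # so gradients do not vanish or blow up.
    sign = d if d != 0 else 1.0
    return n / (d + math.copysign(eps, sign))

def strict_decode(n, d, tau=1e-8):
    # Inference-time decoding: a denominator inside the singular band
    # emits an explicit undefined state rather than a large finite
    # surrogate; the sign of N is kept, echoing SCM's signed boundary.
    if abs(d) <= tau:
        return ("undefined", math.copysign(1.0, n))
    return ("value", n / d)

print(strict_decode(2.0, 0.5))  # ('value', 4.0)
print(strict_decode(2.0, 0.0))  # ('undefined', 1.0)
```

The point of the split is that the same ⟨N, D⟩ head serves both modes: nothing about the learned parameters changes at inference, only how the tuple is interpreted.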

Three domains (10 seeds each):

  • Dose–response (pharma): strict decoding reduces false finite predictions on censored inputs from 57.3% (rational+ε baseline) to about 1.2×10⁻⁴, with FN_in = 0.
  • RF filter extrapolation (electronics): under 33× OOD extrapolation in Q_f, the shared-denominator SCM model improves peak-retention yield from 39.8% to 77.3% and substantially reduces phase error.
  • Inverse kinematics (robotics): the projective parameterization reduces seed-to-seed variance by 31.8×.

A few limitations:

  • In Dose, reconciling censored-direction supervision with high-quality regression is still an open optimization problem.
  • In robotics, there is a bias-variance trade-off.
  • For ordinary smooth regression problems, this is unnecessary overhead.

We claim that arithmetic design is an inductive bias, and in singular regimes it can matter whether the model represents division-by-zero explicitly.

Blog: https://domezsolt.substack.com/p/from-brahmagupta-to-backpropagation
Paper: https://zenodo.org/records/18944466
Code: https://gitlab.com/domezsolt/zeroproofml

Feedback and cooperation suggestions welcome!

submitted by /u/Temporary-Oven6788