Bias Inheritance in Neural-Symbolic Discovery of Constitutive Closures Under Function-Class Mismatch
arXiv cs.LG / 4/3/2026
Key Points
- The paper studies data-driven discovery of constitutive closures (diffusion and reaction laws) for nonlinear reaction–diffusion PDEs from spatiotemporal observations while avoiding misleading “low residual = correct physics” conclusions.
- It proposes a three-stage neural-symbolic pipeline: learn noise-robust weak-form numerical surrogates under physical constraints, compress them into interpretable symbolic families (polynomial/rational/saturation), and validate by explicit forward re-simulation on unseen initial conditions.
- Numerical experiments show that with matched function libraries, classical weak polynomial baselines can already be near-correct reference estimators and neural surrogates do not automatically outperform them.
- With function-class mismatch, the neural surrogates add necessary flexibility and can be compressed into compact symbolic laws with minimal rollout degradation.
- The authors identify “bias inheritance,” where symbolic compression fails to correct constitutive bias; the symbolic closure’s true error tracks the neural surrogate’s error, implying the main bottleneck is the initial inverse problem rather than the symbolic step.