Posterior-First Neural PDE Simulation: Inferring Hidden Problem State from a Single Field

arXiv cs.LG / 5/6/2026

Key Points

  • The paper argues that neural PDE simulators deployed with only one observed field can suffer from “deterministic collapse,” where distinct hidden problem states are mapped to the same deterministic prediction, harming rollout reliability.
  • It proposes “posterior-first” neural PDE simulation: infer a posterior distribution over a minimal task-sufficient hidden problem state, then condition future prediction on that posterior.
  • The authors provide a theoretical account linking the learning target and failure mode, showing that downstream Bayes values factor through the inferred posterior and that refinement labels enable learning via proper scoring rules.
  • In synthetic and PDEBench metadata-hidden experiments, posterior recovery reduces prediction error and narrows the gap to an oracle, with pooled rollout nRMSE dropping from 0.175 to 0.132 (recovering 59.4% of the direct-to-oracle gap).
  • Overall, the work suggests that single-observation neural PDE simulation should prioritize posterior inference over a monolithic, deterministic field-to-future model.
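
The collapse and its "ambiguity barrier" can be illustrated with a toy sketch (my own construction, not code from the paper): when the observed field leaves a hidden coefficient genuinely ambiguous, any single-valued predictor pays an irreducible squared error equal to the posterior variance, while a predictor that carries the full posterior preserves downstream quantities exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a hidden coefficient z in {+1, -1}, equally likely,
# where the single observed field carries no information about its sign.
# The rollout target is y = z (a stand-in for the future field).
z = rng.choice([-1.0, 1.0], size=10_000)
y = z

# Deterministic field-to-future predictor: forced to output one value,
# its best choice is the posterior mean E[z | obs] = 0.
point_pred = 0.0
point_mse = np.mean((y - point_pred) ** 2)  # equals Var(z | obs) = 1.0

# Posterior-first: keep the full posterior p(z | obs) = {0.5, 0.5}.
# Downstream values that factor through the posterior (e.g. P(y > 0))
# are then recovered rather than collapsed to a single point.
posterior_samples = rng.choice([-1.0, 1.0], size=10_000)
p_y_positive = np.mean(posterior_samples > 0)

print(point_mse, p_y_positive)
```

Here `point_mse` is exactly the ambiguity barrier: no deterministic interface can go below the posterior variance whenever the true posterior is non-Dirac, which is the paper's claimed failure mode.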

Abstract

Neural PDE simulators often receive only a single observed field at deployment. In this setting, a field-to-future predictor can collapse distinct latent problem states into the same deterministic interface, losing the ambiguity needed for reliable rollout and downstream decisions. We propose posterior-first neural PDE simulation: first infer a posterior over the minimal task-sufficient problem state, then condition prediction on that posterior. The resulting theory connects the object, the learning target, and the failure mode: Bayes downstream values factor through this posterior, refinement labels make it learnable by proper scoring rules, and deterministic collapse incurs an ambiguity barrier whenever the true posterior is non-Dirac. Synthetic exact-ambiguity experiments show that point-versus-posterior gaps track the predicted barrier. On metadata-hidden PDEBench tasks, posterior recovery reduces pooled rollout nRMSE from 0.175 to 0.132, closing 59.4% of the direct-to-oracle gap. These results suggest that single-observation neural PDE simulation should be posterior-first rather than monolithic field-to-future prediction.
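
For readers unfamiliar with the metric, a short sketch of how the reported numbers fit together (the normalization and the implied oracle value are my assumptions; the paper's exact definitions may differ):

```python
import numpy as np

def nrmse(pred, target):
    # One common convention: RMSE normalized by the RMS of the target field.
    return np.sqrt(np.mean((pred - target) ** 2)) / np.sqrt(np.mean(target ** 2))

# The abstract reports direct nRMSE 0.175, posterior-first nRMSE 0.132,
# closing 59.4% of the direct-to-oracle gap. The oracle value is not
# reported; back-computing it under that definition of "gap closed":
direct, posterior_first = 0.175, 0.132
oracle = direct - (direct - posterior_first) / 0.594
gap_closed = (direct - posterior_first) / (direct - oracle)
print(round(oracle, 4), round(gap_closed, 3))  # implied oracle ~0.103
```

This is only consistency arithmetic on the abstract's figures, not a result from the paper itself.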