On the role of memorization in learned priors for geophysical inverse problems
arXiv stat.ML, March 23, 2026
Key Points
- The paper analyzes memorization risks in learned priors trained on limited geophysical data for seismic inversion.
- It shows that the posterior under a memorized prior behaves like a reweighted empirical distribution, effectively a likelihood-weighted lookup of training examples.
- In diffusion models, memorization yields a Gaussian mixture prior, and linearizing the forward operator around training examples yields a Gaussian mixture posterior with widths and shifts governed by local Jacobians.
- The authors validate these predictions on a stylized inverse problem and illustrate the practical consequences through diffusion posterior sampling for full waveform inversion.
- These results highlight memorization effects that can arise when using data-driven priors in geophysics and suggest caution when training and applying such models on scarce data.
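The second point above has a simple concrete form. If a prior fully memorizes its training set, it collapses to an empirical distribution over the training examples, and Bayes' rule reduces the posterior to likelihood-reweighting of those examples. The sketch below illustrates this with a toy linear-Gaussian setup; the dimensions, the operator `G`, and the noise level are hypothetical illustration choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: n training models x_i in R^d, a linear
# forward operator G, and noisy data y = G @ x_true + noise.
d, m, n = 4, 3, 10
X_train = rng.normal(size=(n, d))   # "memorized" training examples
G = rng.normal(size=(m, d))         # linear forward operator
sigma = 0.1                         # observation noise std
x_true = X_train[3]                 # data generated from a training example
y = G @ x_true + sigma * rng.normal(size=m)

# Under a fully memorized prior p(x) = (1/n) * sum_i delta(x - x_i),
# the posterior is the empirical distribution reweighted by the likelihood:
#   p(x_i | y) ∝ exp(-||y - G x_i||^2 / (2 sigma^2))
log_w = -np.sum((y - X_train @ G.T) ** 2, axis=1) / (2 * sigma**2)
w = np.exp(log_w - log_w.max())     # subtract max for numerical stability
w /= w.sum()

# Posterior sampling becomes a likelihood-weighted lookup of training examples.
best = int(np.argmax(w))
```

With low noise the weights concentrate almost entirely on the training example that generated the data, which is the "lookup" behavior described above; replacing each delta with a narrow Gaussian gives the mixture-prior picture discussed for diffusion models.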