AI Navigate

Physics-Informed Neural Operator for Predictive Parametric Phase-Field Modelling

arXiv cs.LG / 2026/3/11

Models & Research

Key Points

  • This work proposes PF-PINO, a physics-informed neural operator framework that improves predictive parametric phase-field modelling by embedding physical constraints directly into the learning process.
  • PF-PINO integrates the residuals of the phase-field governing equations into the loss function, enforcing physical laws during training to improve model accuracy and stability.
  • Validated on benchmark phase-field problems including electrochemical corrosion, dendritic crystal solidification, and spinodal decomposition, PF-PINO outperforms the conventional Fourier neural operator (FNO) in accuracy, generalisation, and long-term stability.
  • The approach overcomes limitations of purely data-driven neural operators, providing a computationally efficient and robust tool for accelerating high-throughput parametric studies in materials science.
  • This work demonstrates progress in scientific machine learning with physics-informed neural networks for modelling complex interfacial evolution phenomena.
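The loss construction summarised above can be sketched minimally. The snippet below is an illustrative, hand-rolled version of a physics-informed training loss, not the paper's implementation: it combines a data-fidelity MSE with the mean-squared finite-difference residual of a 1-D Allen-Cahn equation (a standard phase-field model; the coefficients, the explicit-Euler residual convention, and the weight `lam` are all assumptions for illustration).

```python
import numpy as np

def allen_cahn_residual(phi_next, phi, dx, dt, mobility=1.0, kappa=0.01):
    """Finite-difference residual of a 1-D periodic Allen-Cahn equation,
    d(phi)/dt = M * (kappa * phi_xx - (phi**3 - phi)),
    for a snapshot pair (phi, phi_next) separated by dt, using an
    explicit-Euler convention for the time derivative."""
    lap = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx**2
    rhs = mobility * (kappa * lap - (phi**3 - phi))
    return (phi_next - phi) / dt - rhs

def physics_informed_loss(pred_next, true_next, phi, dx, dt, lam=1.0):
    """Data-fidelity MSE plus a lam-weighted mean-squared PDE residual,
    mimicking how a physics term augments a purely data-driven objective."""
    data_term = np.mean((pred_next - true_next) ** 2)
    physics_term = np.mean(allen_cahn_residual(pred_next, phi, dx, dt) ** 2)
    return data_term + lam * physics_term

# A prediction that exactly satisfies one explicit-Euler step of the PDE
# incurs essentially zero physics penalty, so only the data term remains.
n, dt = 64, 1e-5
dx = 1.0 / n
x = np.arange(n) * dx
phi = 0.1 * np.sin(2.0 * np.pi * x)
lap = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx**2
exact_step = phi + dt * (0.01 * lap - (phi**3 - phi))
loss = physics_informed_loss(exact_step, exact_step, phi, dx, dt)
```

A trained operator would minimise this combined objective over many sampled trajectories; the physics term penalises predictions that fit the data but violate the governing equation.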


arXiv:2603.09693 (cs)
[Submitted on 10 Mar 2026]

Title:Physics-informed neural operator for predictive parametric phase-field modelling

Authors: Nanxi Chen and 2 other authors
Abstract: Predicting the microstructural and morphological evolution of materials through phase-field modelling is computationally intensive, particularly for high-throughput parametric studies. While neural operators such as the Fourier neural operator (FNO) show promise in accelerating the solution of parametric partial differential equations (PDEs), the lack of explicit physical constraints may limit generalisation and long-term accuracy for complex phase-field dynamics. Here, we develop a physics-informed neural operator framework to learn parametric phase-field PDEs, namely PF-PINO. By embedding the residuals of phase-field governing equations into the data-fidelity loss function, our framework effectively enforces physical constraints during training. We validate PF-PINO against benchmark phase-field problems, including electrochemical corrosion, dendritic crystal solidification, and spinodal decomposition. Our results demonstrate that PF-PINO significantly outperforms conventional FNO in accuracy, generalisation capability, and long-term stability. This work provides a robust and efficient computational tool for phase-field modelling and highlights the potential of physics-informed neural operators to advance scientific machine learning for complex interfacial evolution problems.
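One of the abstract's benchmarks, spinodal decomposition, is governed by the Cahn-Hilliard equation. As a minimal sketch of the kind of governing-equation residual such a framework could enforce (the spectral discretisation, coefficients, and explicit-Euler convention here are assumptions, not the paper's setup):

```python
import numpy as np

def spectral_laplacian(phi):
    """Laplacian of a 1-D periodic field on the unit interval via FFT."""
    n = phi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)  # angular wavenumbers
    return np.fft.ifft(-(k ** 2) * np.fft.fft(phi)).real

def cahn_hilliard_residual(phi_next, phi, dt, mobility=1.0, kappa=1e-4):
    """Residual of the 1-D Cahn-Hilliard equation,
    d(phi)/dt = M * Laplacian(mu),  mu = phi**3 - phi - kappa * Laplacian(phi),
    for a snapshot pair separated by dt (explicit-Euler convention)."""
    mu = phi ** 3 - phi - kappa * spectral_laplacian(phi)
    rhs = mobility * spectral_laplacian(mu)
    return (phi_next - phi) / dt - rhs

# An exact explicit-Euler step of the PDE gives a near-zero residual, and
# the Laplacian removes the zero mode, so the dynamics conserve mean phi.
n, dt = 128, 1e-7
x = np.arange(n) / n
phi = 0.2 * np.cos(2.0 * np.pi * x)
mu = phi ** 3 - phi - 1e-4 * spectral_laplacian(phi)
exact_step = phi + dt * spectral_laplacian(mu)
res = cahn_hilliard_residual(exact_step, phi, dt)
```

During training, the mean square of such a residual, evaluated on the operator's predicted snapshots, would be added to the data loss so that predictions stay consistent with the conservative phase-field dynamics.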
Subjects: Machine Learning (cs.LG); Materials Science (cond-mat.mtrl-sci); Computational Physics (physics.comp-ph)
Cite as: arXiv:2603.09693 [cs.LG]
  (or arXiv:2603.09693v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09693

Submission history

From: Nanxi Chen [view email]
[v1] Tue, 10 Mar 2026 14:00:00 UTC (3,902 KB)