Physics-Guided Transformer (PGT): Physics-Aware Attention Mechanism for PINNs
arXiv cs.LG · March 31, 2026
Key Points
- The paper proposes Physics-Guided Transformer (PGT), a physics-aware neural architecture that injects PDE structure directly into the self-attention mechanism rather than using soft penalty terms as in many physics-informed methods.
- PGT adds a heat-kernel-derived bias to attention logits to encode diffusion dynamics and temporal causality, enabling query coordinates to attend to physics-conditioned context tokens.
- The model’s decoding uses a FiLM-modulated sinusoidal implicit network to adaptively control spectral response, targeting more stable and physically consistent reconstructions.
- Experiments on the 1D heat equation and the 2D incompressible Navier–Stokes equations show markedly improved accuracy in data-scarce settings, including a reported relative L2 error of 5.9e-3 on the 1D problem with only 100 observations.
- On the 2D cylinder wake problem, PGT is reported to achieve both low PDE residual and competitive reconstruction error, outperforming approaches that optimize only one objective.
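The physics-aware attention described in the second key point can be sketched as below: a log-heat-kernel term, derived from the Gaussian diffusion kernel exp(-|x-x'|²/(4νΔt)), is added to the attention logits, and queries are masked so they only attend to keys at earlier times. All function names, the exact kernel form, and the masking scheme here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def heat_kernel_bias(xq, tq, xk, tk, nu=0.1):
    """Additive attention bias from the log of the 1D heat kernel.

    For a query at (xq, tq) and a key at (xk, tk) with tq > tk, the bias is
    -(xq - xk)^2 / (4 * nu * (tq - tk)); non-causal pairs (tk >= tq) are
    masked to -inf so diffusion only propagates information forward in time.
    """
    dt = tq[:, None] - tk[None, :]            # (Nq, Nk) time offsets
    dx2 = (xq[:, None] - xk[None, :]) ** 2    # (Nq, Nk) squared distances
    return np.where(dt > 0, -dx2 / (4.0 * nu * np.maximum(dt, 1e-9)), -np.inf)

def physics_aware_attention(Q, K, V, xq, tq, xk, tk, nu=0.1):
    """Scaled dot-product attention with the heat-kernel bias on the logits."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + heat_kernel_bias(xq, tq, xk, tk, nu)
    weights = softmax(logits, axis=-1)
    return weights @ V, weights

# Toy usage: 3 query coordinates attend over 5 physics-conditioned context
# tokens; the last key sits in the future and should receive zero weight.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(5, 4))
V = rng.normal(size=(5, 2))
xq = np.linspace(0.0, 1.0, 3); tq = np.full(3, 1.0)
xk = np.linspace(0.0, 1.0, 5); tk = np.array([0.2, 0.4, 0.6, 0.8, 1.5])
out, W = physics_aware_attention(Q, K, V, xq, tq, xk, tk)
```

Because the bias enters the logits additively, it reweights rather than replaces the learned query-key similarity, which is what distinguishes this from a soft PDE-residual penalty on the loss.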
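The FiLM-modulated sinusoidal decoder in the third key point can likewise be sketched in a few lines: a conditioning signal supplies a per-feature scale (gamma) and shift (beta) that modulate a sine-activated layer's pre-activation, so the decoder's spectral response adapts per query. The initialization scheme, layer shapes, and the assumption that gamma/beta come from the attention context are illustrative, not the paper's exact architecture.

```python
import numpy as np

def siren_uniform(rng, fan_in, fan_out, w0=30.0, first=False):
    # SIREN-style uniform init keeps sine pre-activations well distributed.
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / w0
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))

def film_siren_decode(coords, gamma, beta, W1, W2, w0=30.0):
    """One FiLM-modulated sinusoidal layer followed by a linear readout.

    gamma rescales the sine frequency and beta shifts its phase; both are
    assumed to be produced by a conditioning network from the attended
    physics context, which is how the spectral response is controlled.
    """
    h = np.sin(w0 * (coords @ W1) * gamma + beta)
    return h @ W2

# Toy usage: decode a scalar field value at 8 (x, t) query coordinates,
# with identity FiLM parameters (gamma = 1, beta = 0) for illustration.
rng = np.random.default_rng(0)
n, hidden = 8, 16
coords = rng.uniform(-1.0, 1.0, size=(n, 2))
gamma = np.ones((n, hidden))
beta = np.zeros((n, hidden))
W1 = siren_uniform(rng, 2, hidden, first=True)
W2 = siren_uniform(rng, hidden, 1)
u_hat = film_siren_decode(coords, gamma, beta, W1, W2)
```

Scaling gamma shifts the layer toward higher frequencies, which is the knob a conditioning network can turn to trade smoothness against fine detail in the reconstruction.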