Translation Invariance of Neural Operators for the FitzHugh-Nagumo Model
arXiv cs.LG / 3/19/2026
Key Points
- This paper studies translation invariance of Neural Operators (NOs) applied to the FitzHugh-Nagumo model to capture its stiff spatio-temporal dynamics.
- It benchmarks seven NO architectures: Convolutional Neural Operators (CNOs), Deep Operator Networks (DONs), DONs with a CNN encoder (DONs-CNN), Proper Orthogonal Decomposition DONs (POD-DONs), Fourier Neural Operators (FNOs), Tucker Tensorized FNOs (TFNOs), and Localized Neural Operators (LocalNOs).
- CNOs generalize well to translated dynamics but incur higher training costs; FNOs achieve the lowest training error but the highest inference time; DONs and their variants train and infer efficiently but generalize poorly to translated test data.
- The study provides a comprehensive benchmark highlighting the current capabilities and limitations of NOs in capturing complex ionic model dynamics and informs future research on dataset-efficient training.
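The translation-invariance property the benchmark probes can be stated concretely: an operator G is translation-equivariant if shifting the input and then applying G gives the same result as applying G and then shifting the output. The sketch below is not from the paper; it is a minimal NumPy illustration of that check using a circular 1-D convolution (a translation-equivariant operator, like the kernels underlying CNOs and FNOs), with all function and variable names invented for this example.

```python
import numpy as np

def conv_operator(u, kernel):
    """Circular 1-D convolution: a simple translation-equivariant operator."""
    n = len(u)
    return np.array([
        sum(kernel[j] * u[(i - j) % n] for j in range(len(kernel)))
        for i in range(n)
    ])

# A sample input function on a periodic 1-D grid.
u = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
kernel = np.array([0.25, 0.5, 0.25])  # arbitrary smoothing kernel

shift = 7
# Equivariance check: G(shift(u)) should equal shift(G(u)).
shifted_then_applied = conv_operator(np.roll(u, shift), kernel)
applied_then_shifted = np.roll(conv_operator(u, kernel), shift)

print(np.allclose(shifted_then_applied, applied_then_shifted))  # → True
```

An architecture that fails this check on translated test data (as the paper reports for DONs and their variants) has learned features tied to absolute grid positions rather than to the local dynamics of the FitzHugh-Nagumo fields.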