Translation Invariance of Neural Operators for the FitzHugh-Nagumo Model
arXiv cs.LG · March 19, 2026
Key Points
- This paper studies the translation invariance of Neural Operators (NOs) applied to the FitzHugh-Nagumo model, a reaction-diffusion system with stiff spatio-temporal dynamics (a standard form is sketched after this list).
- It benchmarks seven NO architectures: Convolutional Neural Operators (CNOs), Deep Operator Networks (DONs), DONs with a CNN encoder (DONs-CNN), Proper Orthogonal Decomposition DONs (POD-DONs), Fourier Neural Operators (FNOs), Tucker Tensorized FNOs (TFNOs), and Localized Neural Operators (LocalNOs).
- CNOs perform well on translated dynamics but incur higher training costs; FNOs achieve the lowest training error but the highest inference time; DONs and their variants train and infer efficiently but generalize poorly to translated test data (a shift-based check of this property is sketched after this list).
- The study provides a comprehensive benchmark highlighting the current capabilities and limitations of NOs in capturing complex ionic model dynamics and informs future research on dataset-efficient training.
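For reference, a standard spatial form of the FitzHugh-Nagumo model is the two-variable reaction-diffusion system below; the exact parameterization used in the paper may differ, as several equivalent conventions exist.

$$
\begin{aligned}
\partial_t u &= D\,\nabla^2 u + u\,(u - a)(1 - u) - w,\\
\partial_t w &= \varepsilon\,(\beta u - \gamma w),
\end{aligned}
$$

Here $u$ is the fast activation (membrane-potential-like) variable, $w$ is the slow recovery variable, $D$ is the diffusion coefficient, and $\varepsilon \ll 1$ creates the time-scale separation that makes the dynamics stiff.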
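To make the evaluated property concrete, the following is a minimal sketch of a translation-equivariance check (the field-to-field analogue of invariance) on a periodic grid. The `model` interface, grid shape, and shift amount are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch of a translation-equivariance check for a trained neural operator.
# `model` is a hypothetical operator mapping an input field of shape (H, W)
# to an output field of the same shape. Assumes periodic boundary
# conditions, so a circular shift (np.roll) is the natural translation.
import numpy as np

def relative_l2(pred, ref):
    """Relative L2 error between two fields."""
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

def translation_equivariance_error(model, field, shift=(8, 0)):
    """Compare G(T a) against T G(a) for a circular translation T.

    A perfectly translation-equivariant operator gives zero error;
    architectures tied to absolute position (e.g. fixed sensor
    locations in DONs) typically do not.
    """
    out_of_shifted = model(np.roll(field, shift, axis=(0, 1)))      # G(T a)
    shifted_output = np.roll(model(field), shift, axis=(0, 1))      # T G(a)
    return relative_l2(out_of_shifted, shifted_output)

if __name__ == "__main__":
    # Stand-in "operator": a periodic smoothing stencil, which commutes
    # with circular shifts, so the error should be near machine precision.
    rng = np.random.default_rng(0)
    a = rng.standard_normal((64, 64))
    smooth = lambda f: 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                               + np.roll(f, 1, 1) + np.roll(f, -1, 1))
    print(translation_equivariance_error(smooth, a))
```

The same function can be run over a batch of test fields and a range of shifts to reproduce the kind of translated-data comparison the benchmark describes.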