AI Navigate

Upper Generalization Bounds for Neural Oscillators

arXiv cs.LG / 3/11/2026

Ideas & Deep Analysis | Models & Research

Key Points

  • The paper analyzes neural oscillators derived from second-order ordinary differential equations combined with multilayer perceptrons (MLPs) for modeling dynamic nonlinear structural systems (a minimal sketch of this architecture follows this list).
  • It provides upper probably approximately correct (PAC) generalization bounds using Rademacher complexity for approximating causal continuous operators and stable second-order dynamical systems.
  • The results show estimation errors grow polynomially with MLP size and time length, mitigating the curse of parametric complexity.
  • The study highlights that constraining MLP Lipschitz constants via regularization improves generalization, which is empirically validated through experiments on a nonlinear seismic response system (see the regularization sketch at the end of this page).
  • Findings confirm that controlling MLP norms enhances neural oscillator performance, especially when training data is limited.
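
To make the first point concrete, here is a minimal sketch of a neural oscillator: a discretized second-order ODE drives a hidden state, and an MLP reads out the structural response. The specific ODE form shown (a damped, tanh-forced oscillator in the style of coupled oscillatory RNNs), the layer sizes, and all names (W, W2, V, gamma, eps) are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(z, weights, biases):
    """Plain MLP with tanh hidden activations and a linear output layer."""
    for W, b in zip(weights[:-1], biases[:-1]):
        z = np.tanh(W @ z + b)
    return weights[-1] @ z + biases[-1]

def neural_oscillator(u, dt=0.01, d=16):
    """Map a load sequence u (shape T x m) to a scalar response sequence.

    Hidden dynamics (semi-implicit Euler discretization of a second-order ODE):
        y'' = tanh(W y + W2 y' + V u(t) + b) - gamma * y - eps * y'
    """
    T, m = u.shape
    W = 0.1 * rng.standard_normal((d, d))
    W2 = 0.1 * rng.standard_normal((d, d))
    V, b = 0.1 * rng.standard_normal((d, m)), np.zeros(d)
    gamma, eps = 1.0, 0.5  # stiffness- and damping-like terms (illustrative)
    # Read-out MLP: hidden state -> scalar structural response
    Ws = [0.1 * rng.standard_normal((32, d)), 0.1 * rng.standard_normal((1, 32))]
    bs = [np.zeros(32), np.zeros(1)]

    y, v = np.zeros(d), np.zeros(d)  # displacement-like and velocity-like states
    out = np.empty(T)
    for t in range(T):
        acc = np.tanh(W @ y + W2 @ v + V @ u[t] + b) - gamma * y - eps * v
        v = v + dt * acc  # update velocity first, then position
        y = y + dt * v
        out[t] = mlp(y, Ws, bs)[0]
    return out

# Usage: a random excitation standing in for a seismic load record.
response = neural_oscillator(rng.standard_normal((500, 1)))
print(response.shape)  # (500,)
```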


arXiv:2603.09742 (cs)
[Submitted on 10 Mar 2026]

Title: Upper Generalization Bounds for Neural Oscillators

Abstract: Neural oscillators that originate from second-order ordinary differential equations (ODEs) have shown competitive performance in learning mappings between dynamic loads and responses of complex nonlinear structural systems. Despite this empirical success, theoretical quantification of the generalization capacity of their neural network architectures remains undeveloped. In this study, a neural oscillator consisting of a second-order ODE followed by a multilayer perceptron (MLP) is considered. Its upper probably approximately correct (PAC) generalization bounds, both for approximating causal and uniformly continuous operators between continuous temporal function spaces and for approximating uniformly asymptotically incrementally stable second-order dynamical systems, are derived by leveraging the Rademacher complexity framework. The theoretical results show that the estimation errors grow polynomially with respect to both the MLP size and the time length, thereby avoiding the curse of parametric complexity. Furthermore, the derived error bounds demonstrate that constraining the Lipschitz constants of the MLPs via loss-function regularization can improve the generalization ability of the neural oscillator. A numerical study of a Bouc-Wen nonlinear system under stochastic seismic excitation validates the theoretically predicted power laws of the estimation errors with respect to sample size and time length, and confirms the effectiveness of constraining the MLPs' matrix and vector norms in enhancing the neural oscillator's performance under limited training data.
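
The abstract does not spell out the bound itself, but a Rademacher-complexity PAC bound generically takes the following shape; the poly(·) scaling of the complexity term is an assumption about the form, inferred from the abstract's claim of polynomial growth in MLP size and time length, not a quote from the paper.

```latex
% Generic shape of a Rademacher-based PAC generalization bound:
% with probability at least 1 - \delta over n i.i.d. training samples,
\[
  \mathcal{R}(\hat{f}) \;\le\; \widehat{\mathcal{R}}_n(\hat{f})
  \;+\; 2\,\mathfrak{R}_n(\mathcal{F})
  \;+\; 3\sqrt{\frac{\log(2/\delta)}{2n}} ,
\]
% where, per the abstract, the paper shows the complexity term scales as
\[
  \mathfrak{R}_n(\mathcal{F}) \;\lesssim\;
  \frac{\mathrm{poly}(\text{MLP size},\, T)}{\sqrt{n}} ,
\]
% i.e., polynomially in network size and time horizon T rather than
% exponentially, avoiding the curse of parametric complexity.
```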
Subjects: Machine Learning (cs.LG); Dynamical Systems (math.DS); Machine Learning (stat.ML)
Cite as: arXiv:2603.09742 [cs.LG]
  (or arXiv:2603.09742v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09742

Submission history

From: Zifeng Huang
[v1] Tue, 10 Mar 2026 14:47:23 UTC (261 KB)
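
Finally, the abstract's norm-constraint finding, that penalizing the MLPs' matrix and vector norms improves performance under limited data, can be illustrated as a loss-function regularizer. This is a hedged sketch: the penalty form (spectral norms for weight matrices plus Euclidean norms for bias vectors), the model, the synthetic data, and the strength `lam` are illustrative assumptions, not the authors' exact regularizer. The rationale for the matrix-norm term is standard: for 1-Lipschitz activations, the product of the weight matrices' spectral norms upper-bounds the MLP's Lipschitz constant, so shrinking those norms shrinks that bound.

```python
import torch
import torch.nn as nn

# Stand-in read-out MLP; sizes are placeholders.
mlp = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
lam = 1e-3  # regularization strength (illustrative, not from the paper)

def norm_penalty(model):
    # Spectral norms of weight matrices bound the Lipschitz constant's
    # product upper bound; bias vectors get Euclidean norms.
    mats = sum(torch.linalg.matrix_norm(p, ord=2)
               for p in model.parameters() if p.ndim == 2)
    vecs = sum(torch.linalg.vector_norm(p)
               for p in model.parameters() if p.ndim == 1)
    return mats + vecs

x, y = torch.randn(64, 16), torch.randn(64, 1)  # synthetic stand-in batch
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(mlp(x), y) + lam * norm_penalty(mlp)
    loss.backward()
    opt.step()
```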