Upper Generalization Bounds for Neural Oscillators

arXiv cs.LG / March 11, 2026


Key Points

  • This paper analyzes neural oscillators, which combine a second-order ordinary differential equation (ODE) with a multilayer perceptron (MLP), for modeling dynamic nonlinear structural systems.
  • It derives upper probably approximately correct (PAC) generalization bounds, via the Rademacher complexity framework, for approximating causal continuous operators and stable second-order dynamical systems.
  • The results show that the estimation errors grow only polynomially with the MLP size and the time length, thereby avoiding the curse of parametric complexity.
  • Constraining the Lipschitz constants of the MLPs through loss-function regularization is shown to improve generalization, and this is validated empirically on a nonlinear seismic response system.
  • Controlling the MLPs' norms improves the neural oscillator's performance, particularly when training data are limited.
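To make the architecture concrete, here is a minimal sketch of a neural oscillator as described above: a driven second-order ODE integrated in time, followed by an MLP readout applied to the ODE state. All sizes, parameter values, and function names here are illustrative assumptions, not the paper's implementation, and the MLP weights are random (training is not shown).

```python
import numpy as np

rng = np.random.default_rng(0)

def oscillator_states(u, dt=0.01, omega=2.0, gamma=0.5):
    """Integrate the damped second-order ODE
        x'' + 2*gamma*x' + omega^2 * x = u(t)
    with semi-implicit Euler; returns the (x, v) state at every step."""
    x, v = 0.0, 0.0
    states = np.empty((len(u), 2))
    for k, uk in enumerate(u):
        v += dt * (uk - 2.0 * gamma * v - omega**2 * x)
        x += dt * v
        states[k] = (x, v)
    return states

def mlp_readout(states, W1, b1, W2, b2):
    """One-hidden-layer tanh MLP applied pointwise to each ODE state."""
    h = np.tanh(states @ W1 + b1)
    return h @ W2 + b2

# Hypothetical layer sizes; random weights stand in for a trained model.
W1 = 0.5 * rng.normal(size=(2, 16))
b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 1))
b2 = np.zeros(1)

u = np.sin(np.linspace(0.0, 10.0, 1000))   # dynamic load (input signal)
y = mlp_readout(oscillator_states(u), W1, b1, W2, b2)  # predicted response
```

The ODE block supplies temporal memory, so the MLP only needs a static state-to-output map; this separation is what the paper's bounds exploit.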

Computer Science > Machine Learning

arXiv:2603.09742 (cs)
[Submitted on 10 Mar 2026]

Title: Upper Generalization Bounds for Neural Oscillators

Abstract: Neural oscillators that originate from the second-order ordinary differential equations (ODEs) have shown competitive performance in learning mappings between dynamic loads and responses of complex nonlinear structural systems. Despite this empirical success, theoretically quantifying the generalization capacities of their neural network architectures remains undeveloped. In this study, the neural oscillator consisting of a second-order ODE followed by a multilayer perceptron (MLP) is considered. Its upper probably approximately correct (PAC) generalization bound for approximating causal and uniformly continuous operators between continuous temporal function spaces and that for approximating the uniformly asymptotically incrementally stable second-order dynamical systems are derived by leveraging the Rademacher complexity framework. The theoretical results show that the estimation errors grow polynomially with respect to both the MLP size and the time length, thereby avoiding the curse of parametric complexity. Furthermore, the derived error bounds demonstrate that constraining the Lipschitz constants of the MLPs via loss function regularization can improve the generalization ability of the neural oscillator. A numerical study considering a Bouc-Wen nonlinear system under stochastic seismic excitation validates the theoretically predicted power laws of the estimation errors with respect to the sample size and time length, and confirms the effectiveness of constraining MLPs' matrix and vector norms in enhancing the performance of the neural oscillator under limited training data.
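The abstract's regularization idea can be sketched as follows. For an MLP with 1-Lipschitz activations (e.g. tanh or ReLU), the product of the layers' spectral norms upper-bounds its Lipschitz constant, so penalizing the weight-matrix norms in the loss keeps that bound small. This is a generic illustration of norm-constrained training under those assumptions, not the paper's exact objective; `lam` and the function names are hypothetical.

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Product of layer spectral norms: an upper bound on the Lipschitz
    constant of an MLP whose activations are themselves 1-Lipschitz."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)   # largest singular value
    return bound

def regularized_loss(data_mse, weights, lam=1e-3):
    """Hypothetical training objective: data misfit plus a matrix-norm
    penalty that discourages a large Lipschitz bound."""
    penalty = sum(np.linalg.norm(W, ord=2) ** 2 for W in weights)
    return data_mse + lam * penalty
```

Shrinking this bound tightens the generalization error bounds in the paper, which is why norm control helps most when training data are scarce.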
Subjects: Machine Learning (cs.LG); Dynamical Systems (math.DS); Machine Learning (stat.ML)
Cite as: arXiv:2603.09742 [cs.LG]
  (or arXiv:2603.09742v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09742

Submission history

From: Zifeng Huang [view email]
[v1] Tue, 10 Mar 2026 14:47:23 UTC (261 KB)