An accurate flatness measure to estimate the generalization performance of CNN models

arXiv cs.LG / March 11, 2026

Ideas & Deep Analysis / Models & Research

Key Points

  • This paper proposes an exact, architecturally faithful flatness measure designed specifically for Convolutional Neural Networks (CNNs), overcoming the limitations of existing measures, which were either designed for fully connected networks or ignore the geometric structure specific to CNNs.
  • For CNNs that use global average pooling followed by a linear classifier, the authors derive a closed-form expression for the trace of the Hessian of the cross-entropy loss with respect to the convolutional kernels.
  • They propose a parameterization-aware relative flatness measure that accounts for the scaling symmetries and inter-filter interactions induced by convolution and pooling operations.
  • Empirical evaluation on standard image-classification benchmarks shows that the proposed flatness measure can effectively assess and compare the generalization performance of CNN models.
  • The measure can serve as a practical tool for guiding architecture and training choices aimed at improving the generalization ability of real-world CNNs.
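The scaling symmetries mentioned in the third point can be illustrated concretely. The toy model below is not from the paper; it is a minimal two-parameter sketch showing that rescaling the weights of a ReLU network as (α·w1, w2/α) leaves the loss unchanged while changing the raw Hessian trace, which is why a parameterization-aware (relative) flatness measure is needed:

```python
def loss(w1, w2, x=1.0, y=2.0):
    """Squared loss of a toy 2-parameter ReLU 'network' f(x) = w2 * relu(w1 * x)."""
    f = w2 * max(w1 * x, 0.0)
    return (f - y) ** 2

def hessian_trace(w1, w2, h=1e-4):
    """Trace of the loss Hessian w.r.t. (w1, w2), via central finite differences."""
    d2_w1 = (loss(w1 + h, w2) - 2 * loss(w1, w2) + loss(w1 - h, w2)) / h**2
    d2_w2 = (loss(w1, w2 + h) - 2 * loss(w1, w2) + loss(w1, w2 - h)) / h**2
    return d2_w1 + d2_w2

# A function-preserving rescaling (alpha * w1, w2 / alpha) of the weights.
w1, w2, alpha = 1.0, 2.0, 3.0
print(loss(w1, w2), loss(alpha * w1, w2 / alpha))  # identical losses
print(hessian_trace(w1, w2),                       # ~10.0 ...
      hessian_trace(alpha * w1, w2 / alpha))       # ... vs ~18.89: same function, different "flatness"
```

A raw Hessian-trace measure would rank these two functionally identical networks differently; a relative flatness measure is constructed to be invariant under such reparameterizations.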


arXiv:2603.09016 (cs)
[Submitted on 9 Mar 2026]

Title: An accurate flatness measure to estimate the generalization performance of CNN models

By Rahman Taleghani and 2 other authors
Abstract: Flatness measures based on the spectrum or the trace of the Hessian of the loss are widely used as proxies for the generalization ability of deep networks. However, most existing definitions are either tailored to fully connected architectures, relying on stochastic estimators of the Hessian trace, or ignore the specific geometric structure of modern Convolutional Neural Networks (CNNs). In this work, we develop a flatness measure that is both exact and architecturally faithful for a broad and practically relevant class of CNNs. We first derive a closed-form expression for the trace of the Hessian of the cross-entropy loss with respect to convolutional kernels in networks that use global average pooling followed by a linear classifier. Building on this result, we then specialize the notion of relative flatness to convolutional layers and obtain a parameterization-aware flatness measure that properly accounts for the scaling symmetries and filter interactions induced by convolution and pooling. Finally, we empirically investigate the proposed measure on families of CNNs trained on standard image-classification benchmarks. The results obtained suggest that the proposed measure can serve as a robust tool to assess and compare the generalization performance of CNN models, and to guide the design of architecture and training choices in practice.
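For context on the "stochastic estimators of the Hessian trace" that the abstract contrasts with the paper's closed form: the standard choice is the Hutchinson estimator, tr(H) ≈ (1/n) Σᵢ vᵢᵀ H vᵢ with random Rademacher probe vectors vᵢ. The sketch below is illustrative only, demonstrated on an explicit symmetric matrix rather than a network Hessian:

```python
import numpy as np

def hutchinson_trace(hvp, dim, n_samples=10_000, rng=None):
    """Estimate tr(H) given only Hessian-vector products hvp(v) = H @ v.

    Uses Rademacher probe vectors v, for which E[v^T H v] = tr(H).
    """
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe
        total += v @ hvp(v)
    return total / n_samples

# Demo on an explicit symmetric matrix, where the exact trace is known.
rng = np.random.default_rng(42)
A = rng.standard_normal((6, 6))
H = A.T @ A                       # symmetric positive semi-definite
est = hutchinson_trace(lambda v: H @ v, dim=6)
exact = np.trace(H)
print(est, exact)                 # estimate lies close to the exact trace
```

In deep-learning practice, `hvp` would be implemented via automatic differentiation, costing one Hessian-vector product per probe; the repeated stochastic products are exactly the cost that a closed-form trace expression, such as the one derived in this paper for GAP-plus-linear-classifier CNNs, avoids.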
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE)
MSC classes: 68T07, 62M45, 65F30, 68T05, 49Q12
Cite as: arXiv:2603.09016 [cs.LG]
  (or arXiv:2603.09016v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09016

Submission history

From: Rahman Taleghani [view email]
[v1] Mon, 9 Mar 2026 23:17:49 UTC (1,578 KB)