AI Navigate

An accurate flatness measure to estimate the generalization performance of CNN models

arXiv cs.LG / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces an exact and architecture-aware flatness measure designed specifically for convolutional neural networks (CNNs), addressing limitations of prior measures that were either designed for fully connected networks or ignored CNN-specific geometric structures.
  • A closed-form expression for the Hessian trace of the cross-entropy loss with respect to convolutional kernels is derived for CNNs using global average pooling followed by a linear classifier.
  • The authors propose a parameterization-aware relative flatness measure that accounts for scaling symmetries and filter interactions unique to convolution and pooling operations.
  • Empirical evaluations on standard image-classification benchmarks show that the proposed measure reliably tracks and compares the generalization performance of CNN models.
  • This measure can guide both architectural design and training choices to improve the generalization abilities of CNNs in practice.
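The key points contrast the paper's exact, closed-form measure with prior work that relies on stochastic Hessian-trace estimators. As background, the standard stochastic baseline is Hutchinson's estimator, tr(H) = E[vᵀHv] for random sign (Rademacher) vectors v. The sketch below illustrates it on a small synthetic symmetric matrix standing in for a loss Hessian; it is a generic illustration of the baseline technique, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic symmetric PSD matrix standing in for a loss Hessian.
n = 50
A = rng.standard_normal((n, n))
H = A @ A.T

exact_trace = np.trace(H)

# Hutchinson's estimator: tr(H) = E[v^T H v] for Rademacher vectors v.
num_samples = 20000
v = rng.choice([-1.0, 1.0], size=(num_samples, n))
estimates = ((v @ H) * v).sum(axis=1)  # one v^T H v per sample
hutchinson_trace = estimates.mean()

print(exact_trace, hutchinson_trace)
```

Even with 20,000 probe vectors the estimate only approaches the exact trace up to Monte Carlo noise, which is precisely the imprecision an exact closed-form expression avoids.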


arXiv:2603.09016 (cs)
[Submitted on 9 Mar 2026]

Title: An accurate flatness measure to estimate the generalization performance of CNN models

Authors: Rahman Taleghani and 2 other authors
Abstract: Flatness measures based on the spectrum or the trace of the Hessian of the loss are widely used as proxies for the generalization ability of deep networks. However, most existing definitions are either tailored to fully connected architectures, relying on stochastic estimators of the Hessian trace, or ignore the specific geometric structure of modern Convolutional Neural Networks (CNNs). In this work, we develop a flatness measure that is both exact and architecturally faithful for a broad and practically relevant class of CNNs. We first derive a closed-form expression for the trace of the Hessian of the cross-entropy loss with respect to convolutional kernels in networks that use global average pooling followed by a linear classifier. Building on this result, we then specialize the notion of relative flatness to convolutional layers and obtain a parameterization-aware flatness measure that properly accounts for the scaling symmetries and filter interactions induced by convolution and pooling. Finally, we empirically investigate the proposed measure on families of CNNs trained on standard image-classification benchmarks. The results obtained suggest that the proposed measure can serve as a robust tool to assess and compare the generalization performance of CNN models, and to guide the design of architecture and training choices in practice.
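A closed-form Hessian trace for cross-entropy losses builds on a standard identity: the Hessian of softmax cross-entropy with respect to the logits is diag(p) − ppᵀ, whose trace is Σᵢ pᵢ(1 − pᵢ), independent of the true label. The paper's contribution lies in propagating such structure through global average pooling and a linear classifier back to the convolutional kernels; the sketch below checks only the logit-level identity (all values are illustrative) against a finite-difference Hessian:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(z, y):
    # Cross-entropy loss for logits z and true class index y.
    return -np.log(softmax(z)[y])

z = np.array([1.0, -0.5, 2.0, 0.3])  # illustrative logits
y = 2                                # illustrative true class

# Closed form: Hessian w.r.t. logits is diag(p) - p p^T,
# so its trace is sum_i p_i (1 - p_i).
p = softmax(z)
closed_form_trace = np.sum(p * (1.0 - p))

# Numerical check: sum of second central differences along each logit.
eps = 1e-4
num_trace = 0.0
for i in range(len(z)):
    e_i = np.zeros_like(z)
    e_i[i] = eps
    num_trace += (cross_entropy(z + e_i, y)
                  - 2.0 * cross_entropy(z, y)
                  + cross_entropy(z - e_i, y)) / eps**2

print(closed_form_trace, num_trace)
```

Because the logits of a GAP-plus-linear-classifier CNN depend linearly on the pooled features, a chain-rule argument over this logit-level Hessian is a plausible starting point for the kernel-level closed form the abstract describes; the exact derivation is in the paper.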
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE)
MSC classes: 68T07, 62M45, 65F30, 68T05, 49Q12
Cite as: arXiv:2603.09016 [cs.LG]
  (or arXiv:2603.09016v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09016

Submission history

From: Rahman Taleghani
[v1] Mon, 9 Mar 2026 23:17:49 UTC (1,578 KB)