Explicit integral representations and quantitative bounds for two-layer ReLU networks

arXiv stat.ML · April 28, 2026


Key Points

  • The paper presents a method for constructing explicit integral representations of two-layer ReLU networks, yielding relatively simple representations for any multivariate polynomial.
  • It introduces a sharpened ReLU integral representation based on a harmonic extension and a projection mechanism.
  • The authors provide quantitative error bounds for approximations in the L^{2}(\mathcal{D}) norm using this representation.
  • The main result is that the approximation error bounds carry no explicit dependence on the input dimension or the polynomial degree; they depend instead on the coefficients of the target's monomial expansion and on the choice of distribution \mathcal{D}.

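The flavor of such integral representations can be illustrated with a classical univariate example (this is a hedged sketch, not the paper's harmonic-extension construction): for a twice-differentiable f on [0, 1], Taylor's theorem with integral remainder gives f(x) = f(0) + f'(0)x + ∫₀¹ f''(b) ReLU(x − b) db, i.e. f is exactly an infinite-width two-layer ReLU network. Discretizing the integral yields a finite network; the function `relu_net_from_integral` below is a hypothetical helper written for this note.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_net_from_integral(f0, df0, d2f, n_units, x):
    """Approximate f on [0, 1] via the integral representation
    f(x) = f(0) + f'(0) x + \\int_0^1 f''(b) ReLU(x - b) db,
    discretized by the midpoint rule with n_units hidden units."""
    b = (np.arange(n_units) + 0.5) / n_units   # hidden-unit biases (midpoint nodes)
    w = d2f(b) / n_units                       # outer weights: f''(b_i) * (1/n_units)
    return f0 + df0 * x + relu(x[:, None] - b[None, :]) @ w

# Target monomial f(x) = x^3: f(0) = 0, f'(0) = 0, f''(b) = 6b,
# and indeed ∫_0^1 6b ReLU(x - b) db = 3x^3 - 2x^3 + ... = x^3.
x = np.linspace(0.0, 1.0, 101)
approx = relu_net_from_integral(0.0, 0.0, lambda b: 6.0 * b, 200, x)
err = np.max(np.abs(approx - x**3))
print(f"max error with 200 hidden units: {err:.2e}")
```

In this univariate toy case the error comes only from discretizing the integral; the paper's contribution is a multivariate analogue whose bounds avoid explicit dimension and degree dependence.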
Abstract

An approach to constructing explicit integral representations for two-layer ReLU networks is presented, which yields relatively simple representations for any multivariate polynomial. Quantitative bounds are provided for a particular, sharpened ReLU integral representation, which involves a harmonic extension and a projection. The bounds demonstrate that functions can be approximated with L^{2}(\mathcal{D}) errors that do not depend explicitly on dimension or degree, but rather on the coefficients of their monomial expansions and the distribution \mathcal{D}.