The Surprising Effectiveness of Noise Pretraining for Implicit Neural Representations

arXiv cs.CV / 4/1/2026


Key Points

  • The paper investigates how parameter initialization affects implicit neural representations (INRs), focusing on why data-driven initialization can improve performance.
  • It tests noise pretraining across multiple noise types (e.g., Gaussian, Dead Leaves, Spectral) and evaluates outcomes for both fitting unseen signals and supporting an inverse imaging task (denoising).
  • The results show that pretraining on unstructured noise (Uniform/Gaussian) markedly boosts signal-fitting capacity versus other baselines, despite not producing strong denoising priors.
  • Pretraining on noise with a natural-image-like spectral structure (the classic 1/|f|^α form) provides a better trade-off, achieving denoising performance comparable to the best data-driven initialization methods.
  • The authors argue the spectral noise approach can enable more efficient INR training when domain-specific prior data is limited.
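As a concrete illustration of the spectral noise class mentioned above, power-law noise images can be sampled by shaping white Gaussian noise in the Fourier domain so that power falls off as 1/|f|^α. The following is a minimal numpy sketch; the function name `spectral_noise` and the zero-mean/unit-variance normalization are illustrative choices, not the paper's implementation:

```python
import numpy as np

def spectral_noise(size=64, alpha=1.0, seed=None):
    """Sample an image whose power spectrum falls off as 1/|f|^alpha,
    mimicking the average spectral statistics of natural images."""
    rng = np.random.default_rng(seed)
    # Radial frequency grid; the DC bin is set to 1 to avoid division by zero.
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0
    # Shape complex white noise with amplitude ~ 1/|f|^(alpha/2),
    # so that the resulting power spectrum is ~ 1/|f|^alpha.
    white = rng.standard_normal((size, size)) + 1j * rng.standard_normal((size, size))
    img = np.fft.ifft2(white / f ** (alpha / 2.0)).real
    # Normalize to zero mean, unit variance for use as a pretraining target.
    return (img - img.mean()) / img.std()
```

Setting `alpha=0` recovers unstructured white noise, so a single sampler can sweep between the unstructured and natural-image-like regimes the paper compares.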

Abstract

The approximation and convergence properties of implicit neural representations (INRs) are known to be highly sensitive to parameter initialization strategies. While several data-driven initialization methods demonstrate significant improvements over standard random sampling, the reasons for their success -- specifically, whether they encode classical statistical signal priors or more complex features -- remain poorly understood. In this study, we explore this phenomenon through a series of experimental analyses leveraging noise pretraining. We pretrain INRs on diverse noise classes (e.g., Gaussian, Dead Leaves, Spectral) and measure their ability to both fit unseen signals and encode priors for an inverse imaging task (denoising). Our analyses on image and video data reveal a surprising finding: simply pretraining on unstructured noise (Uniform, Gaussian) dramatically improves signal-fitting capacity compared to all other baselines. However, unstructured noise also yields poor deep image priors for denoising. In contrast, we find that noise with the classic 1/|f|^α spectral structure of natural images achieves an excellent balance of signal-fitting and inverse imaging capabilities, performing on par with the best data-driven initialization methods. This finding enables more efficient INR training in applications lacking sufficient domain-specific prior data. For more details, visit the project page at https://kushalvyas.github.io/noisepretraining.html
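The protocol the abstract describes (pretrain an INR on noise, then reuse the resulting weights as the initialization for fitting an unseen signal) can be sketched in 1D with a tiny sine-activated coordinate network. Everything below is an illustrative assumption rather than the paper's setup: the one-hidden-layer architecture, the SIREN-style first-layer frequency scale `w0`, the learning rate, and the hand-derived gradient-descent loop.

```python
import numpy as np

def make_inr(width=128, w0=30.0, seed=0):
    """One-hidden-layer coordinate network with a sine activation
    (SIREN-style first-layer frequency scale w0 is an assumption)."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-1.0, 1.0, (width, 1)) * w0
    b1 = rng.uniform(-np.pi, np.pi, (width, 1))
    W2 = rng.standard_normal((1, width)) / np.sqrt(width)
    b2 = np.zeros((1, 1))
    return [W1, b1, W2, b2]

def fit(params, x, y, steps=300, lr=0.1):
    """Plain gradient descent on the MSE, with gradients derived by hand."""
    W1, b1, W2, b2 = params
    n = x.shape[1]
    for _ in range(steps):
        pre = W1 @ x + b1                 # (width, n)
        h = np.sin(pre)
        err = (W2 @ h + b2) - y           # (1, n) residual
        # Backprop through the two layers.
        gW2 = err @ h.T / n
        gb2 = err.mean(axis=1, keepdims=True)
        dh = (W2.T @ err) * np.cos(pre)
        gW1 = dh @ x.T / n
        gb1 = dh.mean(axis=1, keepdims=True)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return params, float((err ** 2).mean())

# Noise pretraining: fit unstructured Gaussian noise first ...
x = np.linspace(-1.0, 1.0, 256)[None, :]
rng = np.random.default_rng(1)
params = make_inr(seed=0)
params, _ = fit(params, x, rng.standard_normal((1, 256)))
# ... then reuse the pretrained weights to fit an unseen target signal.
params, final_loss = fit(params, x, np.sin(3 * np.pi * x))
```

Swapping the Gaussian pretraining target for an `alpha`-shaped spectral noise sample is the one-line change that distinguishes the two regimes compared in the paper.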