Interpretable Operator Learning for Inverse Problems via Adaptive Spectral Filtering: Convergence and Discretization Invariance

arXiv stat.ML · March 24, 2026


Key Points

  • The paper proposes SC-Net (Spectral Correction Network), an interpretable operator-learning framework for ill-posed inverse problems that learns a pointwise adaptive spectral filter based on the signal-to-noise ratio.
  • The authors provide a theoretical justification that SC-Net approximates the continuous inverse operator while guaranteeing discretization invariance, addressing a common weakness of many deep-learning inverse methods.
  • In numerical experiments on 1D integral equations, SC-Net attains the minimax-optimal convergence rate O(δ^{0.5}) for smoothness parameters s = p = 1.5, matching known theoretical lower bounds.
  • SC-Net learns interpretable, sharp-cutoff-like filters that outperform Oracle Tikhonov regularization, and it achieves zero-shot super-resolution, maintaining stable reconstruction errors (≈ 0.23) when trained on a coarse grid (N = 256) and tested on much finer grids (up to N = 2048).
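The key points center on a pointwise spectral filter that reweights coefficients of the forward operator. As a rough sketch (not the paper's SC-Net, whose filter is learned), the reweighting mechanism can be illustrated with the two classical filters the paper compares against, applied through the SVD of a toy 1D Gaussian integral operator; the kernel, noise level, and parameter values below are assumptions for illustration only:

```python
import numpy as np

def spectral_reconstruct(A, y, filter_fn):
    """Invert y = A x by damping each spectral coefficient <u_i, y>/sigma_i
    with a pointwise filter weight filter_fn(sigma_i) in [0, 1]."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ y) / np.maximum(s, 1e-12)  # guard near-zero singular values
    return Vt.T @ (filter_fn(s) * coeffs)

# Two classical filter functions (both map singular values to [0, 1]):
tikhonov = lambda s, alpha=1e-3: s**2 / (s**2 + alpha)  # smooth damping
cutoff = lambda s, tau=1e-2: (s >= tau).astype(float)   # sharp truncation

# Toy first-kind integral (smoothing) operator on a uniform grid -- an
# assumption for illustration, not the paper's benchmark problem.
N = 256
t = np.linspace(0.0, 1.0, N)
A = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2) / N  # Gaussian kernel

rng = np.random.default_rng(0)
x_true = np.sin(2.0 * np.pi * t)
y = A @ x_true + 1e-4 * rng.standard_normal(N)  # noise level delta ~ 1e-4

x_tik = spectral_reconstruct(A, y, tikhonov)
x_cut = spectral_reconstruct(A, y, cutoff)
rel_err_tik = np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true)
rel_err_cut = np.linalg.norm(x_cut - x_true) / np.linalg.norm(x_true)
```

SC-Net replaces these hand-fixed filter functions with a learned, signal-to-noise-dependent map from singular values to filter weights; the mechanics of reweighting spectral coefficients stay the same.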

Abstract

Solving ill-posed inverse problems necessitates effective regularization strategies to stabilize the inversion process against measurement noise. Classical methods such as Tikhonov regularization require heuristic parameter tuning, while standard deep learning approaches often lack interpretability and fail to generalize across resolutions. We propose SC-Net (Spectral Correction Network), a novel operator-learning framework. SC-Net operates in the spectral domain of the forward operator, learning a pointwise adaptive filter function that reweights spectral coefficients based on the signal-to-noise ratio. We provide a theoretical analysis showing that SC-Net approximates the continuous inverse operator, guaranteeing discretization invariance. Numerical experiments on 1D integral equations demonstrate that SC-Net: (1) achieves the theoretical minimax-optimal convergence rate (O(δ^{0.5}) for s = p = 1.5), matching theoretical lower bounds; (2) learns interpretable sharp-cutoff filters that outperform Oracle Tikhonov regularization; and (3) exhibits zero-shot super-resolution, maintaining stable reconstruction errors (≈ 0.23) when trained on coarse grids (N = 256) and tested on significantly finer grids (up to N = 2048). The proposed method bridges the gap between rigorous regularization theory and data-driven operator learning.
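The zero-shot super-resolution claim rests on the filter being a function of the (resolution-independent) singular values rather than of grid indices. A minimal sketch, assuming the same toy Gaussian integral operator as above and a fixed sharp-cutoff filter standing in for the learned one, shows one filter function reused unchanged across grid refinements:

```python
import numpy as np

def make_operator(N):
    """Discretize a fixed Gaussian integral kernel on an N-point grid.
    The 1/N quadrature weight keeps singular values stable across N."""
    t = np.linspace(0.0, 1.0, N)
    return t, np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2) / N

def filtered_inverse(A, y, filter_fn):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ (filter_fn(s) * (U.T @ y) / np.maximum(s, 1e-12))

# One resolution-free filter, fixed once and reused at every grid size.
cutoff = lambda s, tau=1e-2: (s >= tau).astype(float)

errors = {}
for N in (256, 512, 1024):  # coarse "training" grid, finer "test" grids
    t, A = make_operator(N)
    x_true = np.sin(2.0 * np.pi * t)
    y = A @ x_true           # noiseless, to isolate the discretization effect
    x_rec = filtered_inverse(A, y, cutoff)
    errors[N] = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

Because the quadrature-weighted singular values converge to those of the continuous operator, the filter keeps essentially the same modes at every N, which is the mechanism behind stable reconstruction errors when transferring from a coarse grid to finer ones.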