On the Sample Complexity of Learning for Blind Inverse Problems

arXiv stat.ML / April 21, 2026


Key Points

  • The paper studies learning guarantees for blind inverse problems, where both the signal and the forward operator are partially unknown and standard non-blind methods fail due to identifiability and symmetry issues.
  • It analyzes the problem through the framework of Linear Minimum Mean Square Estimators (LMMSEs), deriving closed-form optimal estimators and establishing equivalences with Tikhonov-regularized formulations whose structure depends on the assumed distributions of the signal, noise, and random operator (a worked sketch of this equivalence appears after this list).
  • The authors prove convergence of the reconstruction error as the noise and operator randomness vanish, under a source condition assumption.
  • They derive finite-sample error bounds that quantify how performance depends on the noise level, problem conditioning, number of samples, and, explicitly, the operator randomness, and they confirm these predictions with numerical experiments.
  • The work aims to improve reliability of data-driven blind approaches by supplying theoretical, interpretable learning characterizations rather than relying only on empirical performance.
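
The paper's own estimators are not reproduced here, but the classical non-blind Gaussian case already illustrates the kind of LMMSE-Tikhonov equivalence the key points describe. The model below, with a Gaussian prior, isotropic noise, and an additive random perturbation of the operator, is an illustrative assumption rather than the paper's setup:

```latex
% Non-blind Gaussian model: y = A x + n,  x ~ N(0, \Sigma_x),  n ~ N(0, \sigma^2 I).
\hat{x}_{\mathrm{LMMSE}}
  = \Sigma_x A^{\top}\!\left( A \Sigma_x A^{\top} + \sigma^{2} I \right)^{-1} y
  = \left( A^{\top} A + \sigma^{2} \Sigma_x^{-1} \right)^{-1}\! A^{\top} y
  = \arg\min_{x}\; \lVert A x - y \rVert^{2} + \sigma^{2}\, \lVert x \rVert_{\Sigma_x^{-1}}^{2}.
% If the operator is itself random, A = \bar{A} + \sigma_{\mathrm{op}} E with E
% having i.i.d. standard normal entries independent of x and n, then
%   \mathrm{Cov}(A x) = \bar{A}\, \Sigma_x \bar{A}^{\top}
%                        + \sigma_{\mathrm{op}}^{2}\, \mathrm{tr}(\Sigma_x)\, I,
% so operator randomness enters the data covariance like extra isotropic noise
% and inflates the effective Tikhonov regularization accordingly.
```

The middle identity follows from the matrix inversion lemma; it is the mechanism by which a distributional (Bayesian) estimator coincides with a variational, Tikhonov-regularized one.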

Abstract

Blind inverse problems arise in many experimental settings where both the signal of interest and the forward operator are (partially) unknown. In this context, methods developed for the non-blind case cannot be adapted in a straightforward manner due to identifiability issues and symmetric solutions inherent to the blind setting. Recently, data-driven approaches have been proposed to address such problems, demonstrating strong empirical performance and adaptability. However, these methods often lack interpretability and are not supported by theoretical guarantees, limiting their reliability in domains such as applied imaging, where a blind approach often amounts to calibrating the acquisition device. In this work, we shed light on learning in blind inverse problems within the insightful framework of Linear Minimum Mean Square Estimators (LMMSEs). We provide a theoretical analysis, deriving closed-form expressions for optimal estimators and extending classical recovery results to the blind setting. In particular, we establish equivalences with tailored Tikhonov-regularized formulations, where the regularization structure depends explicitly on the distributions of the unknown signal, the noise, and the random forward operator. Under a source condition assumption, we also show how the reconstruction error converges as the noise and the randomness of the operator diminish. Furthermore, we derive finite-sample error bounds that characterize the performance of the learned estimators as a function of the noise level, problem conditioning, and number of available samples. These bounds quantify the impact of operator randomness and make explicit how the associated convergence rates depend on it. Finally, we validate our theoretical findings through illustrative numerical experiments that confirm the predicted convergence behavior.
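
As a sanity check on the convergence behavior described above, the following minimal NumPy sketch simulates a toy blind model y = (Ā + σ_op E)x + σn, builds the LMMSE from exact second moments, verifies that it matches a Tikhonov reconstruction with regularization σ_op² tr(Σ_x) + σ², and tracks the Monte Carlo MSE as the noise and operator randomness shrink together. The dimensions, the Gaussian perturbation model, and all function names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 30, 20                                      # measurements, signal dimension
A_bar = rng.standard_normal((m, d)) / np.sqrt(d)   # mean forward operator (assumed)
Sigma_x = np.eye(d)                                # assumed signal prior: x ~ N(0, I)

def lmmse(sig_noise, sig_op):
    """LMMSE matrix for y = (A_bar + sig_op * E) x + sig_noise * n.

    With E i.i.d. standard normal, independent of x and n,
    Cov(y) = A_bar Sigma_x A_bar^T + (sig_op^2 tr(Sigma_x) + sig_noise^2) I:
    the operator randomness acts like additional isotropic noise.
    """
    tau2 = sig_op**2 * np.trace(Sigma_x) + sig_noise**2
    Cyy = A_bar @ Sigma_x @ A_bar.T + tau2 * np.eye(m)
    return Sigma_x @ A_bar.T @ np.linalg.inv(Cyy)

def tikhonov(lam):
    """Tikhonov reconstruction matrix (A_bar^T A_bar + lam I)^{-1} A_bar^T."""
    return np.linalg.solve(A_bar.T @ A_bar + lam * np.eye(d), A_bar.T)

# Equivalence check: with Sigma_x = I the LMMSE equals Tikhonov with
# lam = sig_op^2 * tr(Sigma_x) + sig_noise^2.
sig = 0.3
gap = np.abs(lmmse(sig, sig) - tikhonov(sig**2 * (np.trace(Sigma_x) + 1))).max()
print(f"max |LMMSE - Tikhonov| = {gap:.2e}")

# Monte Carlo reconstruction error should decay as both randomness sources shrink.
for sig in (1.0, 0.3, 0.1, 0.03):
    W = lmmse(sig, sig)
    errs = []
    for _ in range(2000):
        x = rng.standard_normal(d)
        A = A_bar + sig * rng.standard_normal((m, d))   # random operator draw
        y = A @ x + sig * rng.standard_normal(m)        # noisy blind measurement
        errs.append(np.mean((W @ y - x) ** 2))
    print(f"sigma = {sig:4.2f}   MSE = {np.mean(errs):.4f}")
```

The printed MSE should decrease monotonically with sigma, mirroring the predicted convergence, though the paper's precise rates depend on its source condition and sampling assumptions.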