AI Navigate

Adaptive regularization parameter selection for high-dimensional inverse problems: A Bayesian approach with Tucker low-rank constraints

arXiv cs.LG / 3/18/2026

Key Points

  • The paper proposes a novel variational Bayesian method that uses Tucker decomposition to make high-dimensional inverse problems computationally tractable, performing inference in a lower-dimensional core tensor space instead of the full variable space (see the sketch after this list).
  • It introduces per-mode precision parameters for adaptive regularization that capture anisotropic structures, enabling targeted denoising in directions aligned with physical anisotropy (e.g., row vs. column directions in image deblurring).
  • The method estimates noise levels from the data rather than relying on prior noise information, and it outperforms benchmarks such as the L-curve criterion, GCV, UPRE, and the discrepancy principle in PSNR/SSIM across 2D deblurring, 3D heat conduction, and Fredholm integral equation problems.
  • The approach scales to problems with roughly 110,000 variables, with reported gains of 0.73-2.09 dB in deblurring and 6.75 dB in 3D heat conduction, while acknowledging sensitivity to Tucker rank selection and the lack of theoretical guarantees as limitations.
  • The work bridges Bayesian theory and scalable computation with practical implications for imaging, remote sensing, and scientific computing, and outlines future directions for automated rank selection and theoretical analysis.
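
To make the core-space idea concrete, below is a minimal NumPy sketch (not the paper's code; the tensor sizes and Tucker ranks are hypothetical) of how a Tucker representation shrinks the set of unknowns that inference must update:

```python
import numpy as np

rng = np.random.default_rng(0)

dims = (256, 256, 16)   # full-space unknown: ~1.05M entries (hypothetical sizes)
ranks = (20, 20, 4)     # assumed Tucker ranks (hypothetical; chosen per problem)

core = rng.standard_normal(ranks)                 # core tensor G
factors = [rng.standard_normal((d, r))            # factor matrices U_1, U_2, U_3
           for d, r in zip(dims, ranks)]

def tucker_expand(core, factors):
    """Map the core tensor back to full space: x = G x_1 U_1 x_2 U_2 x_3 U_3."""
    x = core
    for mode, U in enumerate(factors):
        # Contract U (d_k x r_k) with the given mode of x, then restore axis order.
        x = np.moveaxis(np.tensordot(U, x, axes=(1, mode)), 0, mode)
    return x

x = tucker_expand(core, factors)
print(x.shape)  # (256, 256, 16)

n_full = int(np.prod(dims))
n_tucker = int(np.prod(ranks)) + sum(d * r for d, r in zip(dims, ranks))
print(n_full, n_tucker)  # 1048576 vs 11904: inference runs in the small space
```

Inference over the core tensor and factor matrices touches roughly 12,000 parameters instead of about a million, which is the source of the method's computational tractability.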

Abstract

This paper introduces a novel variational Bayesian method that integrates Tucker decomposition to solve high-dimensional inverse problems efficiently. The method reduces computational complexity by transforming variational inference from a high-dimensional space to a lower-dimensional core tensor space via Tucker decomposition. A key innovation is the introduction of per-mode precision parameters, enabling adaptive regularization for anisotropic structures. For instance, in directional image deblurring, the learned parameters align with the physical anisotropy, applying stronger regularization in the critical directions (e.g., row vs. column axes). The method further estimates noise levels from the data, eliminating reliance on prior knowledge of noise parameters (unlike conventional benchmarks such as the discrepancy principle (DP)). Experimental evaluations across 2D deblurring, 3D heat conduction, and Fredholm integral equations demonstrate consistent improvements in quantitative metrics (PSNR, SSIM) and qualitative visualizations (error maps, precision parameter trends) compared to the L-curve criterion, generalized cross-validation (GCV), the unbiased predictive risk estimator (UPRE), and DP. The approach scales to problems with 110,000 variables and outperforms existing methods by 0.73-2.09 dB in deblurring tasks and by 6.75 dB in 3D heat conduction. Limitations include sensitivity to rank selection in the Tucker decomposition and the need for theoretical analysis; future work will explore automated rank selection and theoretical guarantees. This method bridges Bayesian theory and scalable computation, offering practical solutions for large-scale inverse problems in imaging, remote sensing, and scientific computing.
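
For intuition on the per-mode precisions, one standard way to realize them in a Bayesian Tucker model (a hedged sketch in the style of common variational tensor factorizations; the paper's exact parameterization may differ) is to give each factor matrix its own Gaussian prior precision λ_k, with Gamma hyperpriors on the λ_k and on the noise precision τ:

```latex
p(\mathbf{U}_k \mid \lambda_k) \propto \exp\!\Big(-\tfrac{\lambda_k}{2}\,\lVert \mathbf{U}_k \rVert_F^2\Big), \qquad
\lambda_k \sim \mathrm{Gamma}(a_0, b_0), \qquad
\tau \sim \mathrm{Gamma}(c_0, d_0).
```

Under variational updates, modes where the data are less informative receive larger λ_k (stronger regularization in that direction, matching the anisotropy seen in directional deblurring), while τ is inferred from the residuals, which is why no prior noise level is required, unlike the discrepancy principle.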