Adaptive regularization parameter selection for high-dimensional inverse problems: A Bayesian approach with Tucker low-rank constraints
arXiv cs.LG / 3/18/2026
Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper proposes a novel variational Bayesian method that uses Tucker decomposition to reduce dimensionality, making high-dimensional inverse problems computationally tractable by performing inference in the core-tensor space (a rough NumPy sketch of this compression follows the list).
- It introduces per-mode precision parameters for adaptive regularization that capture anisotropic structure, enabling targeted denoising in directions aligned with physical anisotropy (e.g., row vs. column directions in image deblurring); a second sketch after the list illustrates the idea.
- Noise levels are estimated from the data rather than assumed known a priori, and the method outperforms standard parameter-selection rules such as the L-curve, GCV, UPRE, and the discrepancy principle in PSNR/SSIM across 2D deblurring, 3D heat conduction, and Fredholm integral equation test problems.
- The approach scales to problems with roughly 110,000 unknowns, with reported gains of 0.73-2.09 dB in deblurring and 6.75 dB in 3D heat conduction, while the authors note sensitivity to the chosen Tucker rank and the lack of theoretical guarantees as limitations.
- The work bridges Bayesian theory and scalable computation with practical implications for imaging, remote sensing, and scientific computing, and outlines future directions for automated rank selection and theoretical analysis.
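As a rough illustration of the Tucker-based dimensionality reduction in the first bullet, the sketch below builds a truncated higher-order SVD in plain NumPy: the full tensor is replaced by a small core plus per-mode factor matrices, so any subsequent inference only has to touch the core entries. The tensor sizes, ranks, and helper names (`unfold`, `mode_product`, `hosvd`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: rows index the chosen mode, columns the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: T ~ core x_1 U1 x_2 U2 x_3 U3."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)  # project onto the mode-m subspace
    return core, factors

# Hypothetical sizes: a 48 x 48 x 48 field compressed to an 8 x 8 x 8 core,
# which is roughly the ~110,000-variable scale mentioned above.
rng = np.random.default_rng(0)
X = rng.standard_normal((48, 48, 48))
core, factors = hosvd(X, ranks=(8, 8, 8))

full_unknowns = X.size                                    # 110,592
tucker_unknowns = core.size + sum(U.size for U in factors)
print(full_unknowns, tucker_unknowns)                     # 110592 vs. 1664
```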
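The per-mode precision idea in the second bullet can be pictured as an anisotropic quadratic penalty: each tensor mode gets its own precision, so smoothing can be pushed harder along, say, the row direction than the column or depth directions. The finite-difference form and the parameter values below are assumptions for illustration only; the paper's actual prior is a variational Bayesian construction, not this fixed penalty.

```python
import numpy as np

def anisotropic_penalty(core, precisions):
    """Quadratic roughness penalty with one precision per mode:
    a larger lambda_k enforces more smoothness along mode k."""
    penalty = 0.0
    for mode, lam in enumerate(precisions):
        diffs = np.diff(core, axis=mode)       # first differences along this mode
        penalty += 0.5 * lam * np.sum(diffs ** 2)
    return penalty

# Hypothetical setting: penalize variation strongly along rows, weakly elsewhere.
rng = np.random.default_rng(1)
G = rng.standard_normal((8, 8, 8))
print(anisotropic_penalty(G, precisions=(10.0, 0.1, 0.1)))
```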