Learning Affine-Equivariant Proximal Operators

arXiv cs.LG / April 20, 2026

Key Points

  • The paper proposes Affine-Equivariant Learned Proximal Networks (AE-LPNs), neural-network parametrizations that provably compute exact proximal operators.
  • AE-LPNs extend Learned Proximal Networks (LPNs) by enforcing shift and scaling (affine) equivariance in the learned regularizers and their corresponding proximal operators (see the sketch after this list).
  • The authors validate the approach first on constructive synthetic examples, then on real denoising tasks under out-of-distribution conditions.
  • The resulting equivariant learned proximals improve robustness to noise distribution changes and affine shifts beyond what the model saw during training.

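To make the equivariance property concrete: the proximal operator of a regularizer R is prox_R(y) = argmin_x ½‖x − y‖² + R(x), and affine equivariance means prox_R(a·y + b·1) = a·prox_R(y) + b·1. A classical sufficient condition is that R be shift-invariant (R(x + b·1) = R(x)) and homogeneous of degree two (R(a·x) = a²·R(x)). The NumPy sketch below is not from the paper; it is a minimal check of the property for a hand-picked quadratic smoothness regularizer R(x) = (λ/2)‖Dx‖², with D a finite-difference matrix, chosen purely for illustration.

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)

# Forward-difference matrix D (so D @ ones(n) == 0). The regularizer
# R(x) = (lam / 2) * ||D x||^2 is then shift-invariant and 2-homogeneous.
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
lam = 5.0

# For this quadratic R, prox_R(y) solves (I + lam * D^T D) x = y exactly.
A = np.eye(n) + lam * D.T @ D

def prox(y):
    return np.linalg.solve(A, y)

y = rng.standard_normal(n)
a, b = 3.0, -1.5

lhs = prox(a * y + b)            # proximal of the affinely transformed input
rhs = a * prox(y) + b            # affinely transformed proximal of the input
print(np.max(np.abs(lhs - rhs))) # ~1e-14: affine equivariance holds
```

This guarantee comes for free here because the regularizer was chosen by hand; AE-LPNs aim to build the same kind of guarantee directly into a learned, potentially non-convex regularizer, where it does not hold by default.
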
Abstract

Proximal operators are fundamental across many applications in signal processing and machine learning, including solving ill-posed inverse problems. Recent work has introduced Learned Proximal Networks (LPNs), parametric functions that compute exact proximals for data-driven and potentially non-convex regularizers. In many settings, however, it is important to impose additional structure, such as shift and scale equivariance, on these regularizers and their corresponding proximals. In this work, we show how to obtain learned functions, parametrized by neural networks, that provably compute exact proximal operators while being equivariant to shifts and scaling, which we dub Affine-Equivariant Learned Proximal Networks (AE-LPNs). We demonstrate our results on synthetic, constructive examples, and then on real data via denoising in out-of-distribution settings. Our equivariant learned proximals enhance robustness to noise distributions and affine shifts far beyond the training distribution, improving the practical utility of learned proximal operators.
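
On how "exact proximals" are possible at all: a known characterization (due to Gribonval and Nikolova) says an operator is the proximal of some, possibly non-convex, function exactly when it is the gradient of a convex function, and LPNs build on this by learning a convex potential whose gradient serves as the proximal. The PyTorch sketch below is a hypothetical, minimal illustration of that idea, not the paper's architecture; the input-convex network (ICNN) layout, layer names, and sizes are assumptions, and the additional equivariance constraints that define AE-LPNs are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexPotential(nn.Module):
    """A tiny input-convex network (ICNN) psi(y). Nonnegative weights on
    the hidden-to-hidden path and convex, nondecreasing activations make
    psi convex in y, so grad_y psi is the gradient of a convex function
    and hence, by the characterization above, an exact proximal operator
    of some (possibly non-convex) regularizer."""

    def __init__(self, dim, width=64):
        super().__init__()
        self.input_layer = nn.Linear(dim, width)
        self.skip = nn.Linear(dim, width, bias=False)   # affine path from y
        self.hidden = nn.Linear(width, width)           # weights kept >= 0
        self.output = nn.Linear(width, 1, bias=False)   # weights kept >= 0
        self.act = nn.Softplus()

    def forward(self, y):
        z = self.act(self.input_layer(y))
        # Clamping enforces nonnegativity on the convexity-critical paths.
        z = self.act(F.linear(z, self.hidden.weight.clamp(min=0.0),
                              self.hidden.bias) + self.skip(y))
        return F.linear(z, self.output.weight.clamp(min=0.0)).squeeze(-1)

def learned_prox(psi, y):
    """Evaluate the learned proximal as the gradient of the potential."""
    y = y.detach().requires_grad_(True)
    return torch.autograd.grad(psi(y).sum(), y, create_graph=True)[0]

psi = ConvexPotential(dim=16)
x_hat = learned_prox(psi, torch.randn(8, 16))   # batch of 8 inputs
print(x_hat.shape)                              # torch.Size([8, 16])
```

Training such a potential on data would yield a proximal that is exact by construction; the AE-LPN contribution is to constrain this kind of parametrization so that the resulting operator also commutes with shifts and scalings of its input.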