Learning to Recorrupt: Noise Distribution Agnostic Self-Supervised Image Denoising
arXiv stat.ML · March 30, 2026
Key Points
- Self-supervised image denoising methods often avoid trivial identity mapping by using architectural constraints or loss functions that assume knowledge of the noise distribution.
- Prior “recorruption” approaches (e.g., Noisier2Noise, Recorrupted2Recorrupted) depend on knowing the noise distribution to synthesize effective noisy training pairs.
- The paper introduces Learning to Recorrupt (L2R), a noise-distribution-agnostic approach that learns the recorruption process itself via a monotonic neural network.
- L2R formulates training as a min-max saddle-point objective and removes the need for explicit noise-distribution knowledge.
- Experiments report state-of-the-art denoising performance across multiple challenging noise types, including heavy-tailed, spatially correlated, and signal-dependent Poisson-Gaussian noise.
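To make the recorruption idea concrete, here is a minimal sketch of the Gaussian-noise pair synthesis used by prior methods such as Recorrupted2Recorrupted (the step L2R replaces with a learned transform). The function name, the `alpha` parameter, and the NumPy implementation are illustrative assumptions, not code from the paper:

```python
import numpy as np

def r2r_pair(y, sigma, alpha=0.5, rng=None):
    """Recorrupted2Recorrupted-style pair synthesis for Gaussian noise.

    Given a noisy observation y = x + n with n ~ N(0, sigma^2), draw
    fresh noise z with the SAME std sigma and form two recorruptions.
    The cross-covariance of their noises is
        Cov(n + alpha*z, n - z/alpha) = sigma^2 - sigma^2 = 0,
    so one image can serve as network input and the other as target.
    Note this construction requires knowing sigma -- exactly the
    assumption L2R aims to remove.
    """
    rng = np.random.default_rng(rng)
    z = rng.normal(0.0, sigma, size=np.shape(y))
    y_input = y + alpha * z    # extra-noisy network input
    y_target = y - z / alpha   # target with statistically independent noise
    return y_input, y_target
```

The key property is that the two synthesized noises are uncorrelated regardless of `alpha`, but only because `z` is drawn from the known noise distribution; L2R's contribution, per the summary above, is learning an analogous recorruption without that knowledge.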