Frames2Residual: Spatiotemporal Decoupling for Self-Supervised Video Denoising
arXiv cs.CV / 3/12/2026
Key Points
- The paper introduces Frames2Residual (F2R), a self-supervised video denoising framework that decouples spatiotemporal training into two stages: blind temporal consistency modeling and non-blind spatial texture recovery.
- Stage 1 uses a frame-wise blind temporal estimator to learn inter-frame consistency and produce a temporally stable anchor without relying on center-pixel masking.
- Stage 2 employs a non-blind spatial refiner that uses the temporal anchor to safely reintroduce the center frame and recover high-frequency spatial residuals while preserving temporal stability.
- Experimental results show that this decoupled approach enables F2R to outperform existing self-supervised methods on both sRGB and raw video benchmarks, highlighting the effectiveness of spatiotemporal decoupling in video denoising.
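The two-stage pipeline above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: `blind_temporal_anchor` and `spatial_refine` are hypothetical names, Stage 1 is reduced to a neighbor-frame average (standing in for the trained blind temporal estimator), and Stage 2 is reduced to a guided residual blend (standing in for the trained non-blind refiner).

```python
import numpy as np

def blind_temporal_anchor(frames, t):
    """Stage 1 (sketch): estimate frame t from its temporal neighbors only,
    never reading the noisy center frame itself -- the 'blind' property.
    F2R trains a network for this; here we simply average the neighbors."""
    neighbors = [frames[i] for i in (t - 1, t + 1) if 0 <= i < len(frames)]
    return np.mean(neighbors, axis=0)

def spatial_refine(anchor, center, alpha=0.5):
    """Stage 2 (sketch): use the temporally stable anchor to safely
    reintroduce the center frame and recover spatial residuals.
    F2R trains a non-blind refiner; here we blend the residual back in."""
    residual = center - anchor          # high-frequency detail lost in Stage 1
    return anchor + alpha * residual    # anchor keeps temporal stability

# Toy example: 3 noisy frames of a constant clean signal.
rng = np.random.default_rng(0)
clean = np.ones((3, 4, 4))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

anchor = blind_temporal_anchor(noisy, t=1)   # built without frame 1
denoised = spatial_refine(anchor, noisy[1])  # frame 1 reintroduced
```

The key idea the sketch preserves is the decoupling: the anchor is computed without ever seeing the center frame, so reintroducing that frame in Stage 2 cannot let the network trivially copy its noise.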