A Scale-Adaptive Framework for Joint Spatiotemporal Super-Resolution with Diffusion Models

arXiv cs.LG / 4/24/2026


Key Points

  • The paper addresses limitations in climate video super-resolution models that are typically built for a fixed pair of spatial and temporal upscaling factors, reducing transferability across different resolutions and frame rates.
  • It proposes a scale-adaptive spatiotemporal super-resolution framework that reuses the same model architecture across factors by combining attention-based conditional mean prediction with a residual conditional diffusion model.
  • The method includes an optional mass-conservation transform to preserve aggregated precipitation totals between input and output sequences.
  • Scale adaptivity is achieved by retuning a small set of factor-dependent hyperparameters—diffusion noise schedule amplitude (beta), temporal context length (L), and optionally a mass-conservation function—rather than retraining a new architecture for each factor.
  • Experiments on French precipitation reanalysis data (Comephore) show the architecture can handle spatial SR factors from 1 to 25 and temporal factors from 1 to 6 within a single reusable tuning recipe.
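As a rough illustration of the tuning recipe in the key points above, the factor-dependent hyperparameters could be chosen along the following lines. The function names and the specific scaling rules here are hypothetical; the summary only states the qualitative directions (beta grows with the SR factor, L is set to keep attention horizons comparable across cadences):

```python
# Hypothetical sketch of the factor-dependent tuning recipe.
# The concrete formulas are illustrative assumptions, not the paper's;
# only the qualitative directions come from the summary.

def noise_schedule_amplitude(spatial_factor: int, temporal_factor: int,
                             beta_base: float = 0.02) -> float:
    """Larger SR factors leave more detail underdetermined, so the residual
    diffusion model gets a larger noise amplitude to increase diversity."""
    return beta_base * (1.0 + 0.1 * spatial_factor * temporal_factor)

def temporal_context_length(temporal_factor: int,
                            horizon_minutes: int = 60,
                            hr_step_minutes: int = 5) -> int:
    """Pick L so the attention window covers a fixed physical time horizon
    regardless of the low-resolution cadence (temporal_factor * hr_step)."""
    lr_step_minutes = temporal_factor * hr_step_minutes
    return max(1, horizon_minutes // lr_step_minutes)

# The recipe is meant to span spatial factors 1..25 and temporal factors 1..6.
for s, t in [(1, 1), (5, 3), (25, 6)]:
    beta = noise_schedule_amplitude(s, t)
    L = temporal_context_length(t)
    print(f"spatial={s:2d} temporal={t} -> beta={beta:.3f}, L={L}")
```

With these assumed rules, coarser temporal cadences get shorter context lengths in frames (but a similar horizon in minutes), while the noise amplitude grows monotonically with the joint upscaling factor.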

Abstract

Deep-learning video super-resolution has progressed rapidly, but climate applications typically super-resolve (increase resolution in) either space or time, and joint spatiotemporal models are often designed for a single pair of super-resolution (SR) factors (the spatial and temporal upscaling ratios between the low-resolution and high-resolution sequences), limiting transfer across spatial resolutions and temporal cadences (frame rates). We present a scale-adaptive framework that reuses the same architecture across factors by decomposing spatiotemporal SR into an attention-based deterministic prediction of the conditional mean and a residual conditional diffusion model, with an optional mass-conservation transform (enforcing the same precipitation amount in inputs and outputs) to preserve aggregated totals. Assuming that larger SR factors primarily increase underdetermination (and hence the required context and residual uncertainty) rather than changing the conditional-mean structure, scale adaptivity is achieved by retuning three factor-dependent hyperparameters before retraining: the diffusion noise-schedule amplitude beta (larger for larger factors, to increase diversity); the temporal context length L (set to maintain comparable attention horizons across cadences); and, optionally, the mass-conservation function f (tapered to limit the amplification of extremes at large factors). Demonstrated on reanalysis precipitation over France (Comephore), the same architecture spans super-resolution factors from 1 to 25 in space and 1 to 6 in time, yielding a reusable architecture and tuning recipe for joint spatiotemporal super-resolution across scales.
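The decomposition and the mass-conservation idea from the abstract can be sketched in a few lines of NumPy. This is a minimal toy illustration, not the paper's implementation: the conditional-mean predictor and the diffusion sampler are stubbed out, and the per-block rescaling below is one plausible instance of a mass-conservation transform (the paper's f is additionally tapered to limit the amplification of extremes):

```python
import numpy as np

def conserve_mass(hr: np.ndarray, lr: np.ndarray, factor: int) -> np.ndarray:
    """Rescale each (factor x factor) HR block so its mean equals the
    corresponding LR cell, preserving aggregated precipitation totals.
    Illustrative assumption: the paper's transform f is not spelled out here."""
    H, W = lr.shape
    blocks = hr.reshape(H, factor, W, factor)
    block_means = blocks.mean(axis=(1, 3))
    scale = lr / (block_means + 1e-8)          # avoid division by zero
    return (blocks * scale[:, None, :, None]).reshape(H * factor, W * factor)

rng = np.random.default_rng(0)
factor = 4
lr = rng.uniform(0.0, 5.0, size=(8, 8))                # low-resolution frame
mean_pred = np.kron(lr, np.ones((factor, factor)))     # stub conditional mean
residual = 0.3 * rng.standard_normal(mean_pred.shape)  # stub diffusion sample
hr = np.clip(mean_pred + residual, 0.0, None)          # non-negative rainfall
hr = conserve_mass(hr, lr, factor)
# Each HR block now averages to its LR cell, so aggregated totals are preserved.
```

The design point here is that conservation is imposed as a cheap post-hoc transform on the stochastic output, so the same decomposition (deterministic mean plus diffusion residual) works with or without it.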