Training-inference input alignment outweighs framework choice in longitudinal retinal image prediction

arXiv cs.CV / 4/21/2026


Key Points

  • The study investigates longitudinal retinal image prediction for progressive macular disease, focusing on whether generative modeling complexity is necessary or whether input alignment matters more.
  • A controlled comparison of five conditioning configurations sharing one architecture and training dataset shows that aligning the training and inference input distributions yields large gains (delta-SSIM +0.082, SSIM +0.086, both p < 0.001).
  • The specific choice among aligned framework variants did not significantly change any primary evaluation metric, indicating that input-distribution alignment is the dominant driver.
  • Mechanistic analyses indicate that time-invariant acquisition variability dominates inter-visit changes, limiting the benefit of stochastic sampling width and explaining why simpler aligned approaches work well.
  • Guided by these insights, the authors propose TRU (Temporal Retinal U-Net), a deterministic regression model with continuous time-delta conditioning that matches or exceeds three state-of-the-art benchmarks across 28,902 eyes from three imaging platforms, with its advantage growing as available history length increases.
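The paper does not spell out how continuous time-delta conditioning is wired into the network, but a common pattern is a sinusoidal embedding of the visit gap that modulates feature maps FiLM-style. The sketch below illustrates that pattern with numpy; the function names, embedding size, and the FiLM scale/shift parameterization are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def timestep_embedding(delta_t, dim=8, max_period=100.0):
    """Sinusoidal embedding of a continuous time gap (e.g. months between visits)."""
    half = dim // 2
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    args = delta_t * freqs
    return np.concatenate([np.sin(args), np.cos(args)])

def film_condition(features, delta_t, W_scale, W_shift):
    """FiLM-style modulation: scale and shift each channel by a
    learned linear function of the time-gap embedding (assumed mechanism)."""
    emb = timestep_embedding(delta_t)
    scale = 1.0 + W_scale @ emb          # per-channel scale, near identity at init
    shift = W_shift @ emb                # per-channel shift
    return features * scale[:, None, None] + shift[:, None, None]

# Toy usage: 4-channel 16x16 feature map, conditioned on a 6-month gap.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 16, 16))
W_scale = 0.01 * rng.standard_normal((4, 8))
W_shift = 0.01 * rng.standard_normal((4, 8))
out = film_condition(feats, delta_t=6.0, W_scale=W_scale, W_shift=W_shift)
```

Because the conditioning is a continuous function of the gap, the same trained model can be queried for arbitrary future horizons rather than a fixed set of discrete follow-up intervals.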

Abstract

Quantitative prediction of future retinal appearance from longitudinal imaging would support clinical decisions in progressive macular disease that currently rely on qualitative comparison or scalar progression scores. Recent methods have moved toward increasing generative complexity, but whether this complexity is necessary for slowly progressing retinal disease is unclear. We tested this through a controlled comparison of five conditioning configurations sharing one architecture and training dataset, spanning standard conditional diffusion, inference-aligned stochastic training, and deterministic regression. In our evaluation, aligning the training and inference input distributions produced large gains (delta-SSIM +0.082, SSIM +0.086, both p < 0.001), while the choice among aligned frameworks did not significantly affect any primary metric. Task-entropy and posterior-concentration analyses, replicated on two fundus autofluorescence (FAF) platforms, provided a mechanistic account: the predictable component of inter-visit change is small relative to time-invariant acquisition variability, leaving stochastic sampling with little width to exploit. Guided by these findings, we developed TRU (Temporal Retinal U-Net), a deterministic direct-regression model with continuous time-delta conditioning and multi-scale history aggregation. We evaluated TRU on 28,902 eyes across three imaging platforms: a mixed-disease Optos FAF cohort (9,942 eyes), zero-shot transfer to Stargardt macular dystrophy on Optos (288 eyes) and Heidelberg Spectralis (125 eyes), and a boundary evaluation on Cirrus en-face fundus images from a glaucoma cohort (18,547 eyes). TRU matched or exceeded three state-of-the-art benchmarks on delta-SSIM, SSIM, and PSNR in every FAF cohort, and its advantage grew monotonically with available history length.
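The paper reports delta-SSIM alongside SSIM but the summary above does not define it; a natural reading, which this sketch assumes, is the SSIM gain of the prediction over a copy-forward baseline that simply carries the most recent visit unchanged into the future. The single-window SSIM below is a simplified illustration, not the windowed standard used for reported numbers.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM over the whole image (illustrative simplification
    of the usual locally windowed SSIM)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def delta_ssim(pred, target, last_visit):
    """Assumed definition: SSIM improvement of the model's prediction over
    carrying the last observed visit forward unchanged."""
    return ssim_global(pred, target) - ssim_global(last_visit, target)

# Toy usage: a prediction closer to the future visit than the copy-forward
# baseline yields a positive delta-SSIM.
rng = np.random.default_rng(1)
target = rng.random((32, 32))
last = np.clip(target + 0.10 * rng.standard_normal((32, 32)), 0.0, 1.0)
pred = np.clip(target + 0.05 * rng.standard_normal((32, 32)), 0.0, 1.0)
score = delta_ssim(pred, target, last)
```

A baseline-relative metric like this is informative here precisely because of the paper's mechanistic finding: when inter-visit change is small, raw SSIM against the future visit is dominated by how similar consecutive visits already are, so the copy-forward offset isolates what the model actually predicts.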