AI Navigate

Geometric Autoencoder for Diffusion Models

arXiv cs.CV / 3/12/2026


Key Points

  • The Geometric Autoencoder (GAE) is introduced to improve latent diffusion models by jointly addressing semantic discriminability, reconstruction fidelity, and latent compactness.
  • GAE constructs an optimized, low-dimensional semantic supervision target from Vision Foundation Model priors to guide the autoencoder and align latent representations with meaningful semantics.
  • Latent normalization replaces the KL-divergence in standard VAEs, enabling a more stable latent manifold tailored for diffusion learning.
  • A dynamic noise sampling mechanism is incorporated to improve robust reconstruction under high-intensity noise.
  • Empirical results on ImageNet-1K (256×256) show gFID scores of 1.82 at 80 epochs and 1.31 at 800 epochs without Classifier-Free Guidance; code and models are publicly released at https://github.com/freezing-index/Geometric-Autoencoder-for-Diffusion-Models.
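The paper does not spell out its exact normalization, but the idea of replacing the KL-divergence regularizer with latent normalization can be illustrated with a minimal sketch: instead of penalizing the encoder's posterior toward N(0, I), simply standardize the latents so the diffusion model always sees a well-scaled manifold. All names below are illustrative, not from the GAE codebase:

```python
import numpy as np

def kl_regularizer(mu, logvar):
    """Standard VAE term: KL(q(z|x) || N(0, I)), averaged over all elements.
    This is the restrictive penalty that GAE-style latent normalization replaces."""
    return float(np.mean(0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)))

def normalize_latents(z, eps=1e-6):
    """Deterministic alternative: standardize latents to zero mean and unit
    variance per channel, so latent scale is controlled without a KL penalty
    shaping the posterior."""
    mean = z.mean(axis=(0, 2, 3), keepdims=True)  # per-channel statistics
    std = z.std(axis=(0, 2, 3), keepdims=True)
    return (z - mean) / (std + eps)

# Toy latents: batch of 4, 8 channels, 16x16 spatial grid, badly scaled
rng = np.random.default_rng(0)
z = 3.0 * rng.standard_normal((4, 8, 16, 16)) + 2.0
z_norm = normalize_latents(z)
print(f"mean: {z_norm.mean():.4f}, std: {z_norm.std():.4f}")
```

The sketch is a hedged reading of the key point above: the KL term disappears from the loss entirely, and a cheap statistics-based transform keeps the latent distribution stable for the downstream diffusion model.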

Abstract

Latent diffusion models have established a new state-of-the-art in high-resolution visual generation. Integrating Vision Foundation Model (VFM) priors improves generative efficiency, yet existing latent designs remain largely heuristic. These approaches often struggle to unify semantic discriminability, reconstruction fidelity, and latent compactness. In this paper, we propose Geometric Autoencoder (GAE), a principled framework that systematically addresses these challenges. By analyzing various alignment paradigms, GAE constructs an optimized low-dimensional semantic supervision target from VFMs to provide guidance for the autoencoder. Furthermore, we leverage latent normalization that replaces the restrictive KL-divergence of standard VAEs, enabling a more stable latent manifold specifically optimized for diffusion learning. To ensure robust reconstruction under high-intensity noise, GAE incorporates a dynamic noise sampling mechanism. Empirically, GAE achieves compelling performance on the ImageNet-1K 256×256 benchmark, reaching a gFID of 1.82 at only 80 epochs and 1.31 at 800 epochs without Classifier-Free Guidance, significantly surpassing existing state-of-the-art methods. Beyond generative quality, GAE establishes a superior equilibrium among compression, semantic depth, and robust reconstruction stability. These results validate our design considerations, offering a promising paradigm for latent diffusion modeling. Code and models are publicly available at https://github.com/freezing-index/Geometric-Autoencoder-for-Diffusion-Models.
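The abstract does not specify how the dynamic noise sampling mechanism schedules its noise levels. As a hedged sketch, one plausible reading is a sampler that draws noise intensities log-uniformly but skews the draw toward high intensities as training progresses, so the autoencoder increasingly practices reconstruction from heavily corrupted inputs. The function name, schedule, and sigma range below are all illustrative assumptions, not the paper's actual mechanism:

```python
import math
import random

def sample_noise_level(step, total_steps, sigma_min=0.01, sigma_max=80.0):
    """Illustrative dynamic noise sampler (assumed, not from the GAE paper):
    draw a noise level log-uniformly between sigma_min and sigma_max, then
    skew the draw toward high intensities as training progresses."""
    progress = step / total_steps                  # 0 at start, 1 at end
    u = random.random() ** (1.0 - 0.5 * progress)  # exponent < 1 skews u toward 1
    log_sigma = (1.0 - u) * math.log(sigma_min) + u * math.log(sigma_max)
    return math.exp(log_sigma)

# Compare the average sampled noise intensity early vs. late in training
random.seed(0)
early = sum(sample_noise_level(0, 1000) for _ in range(5000)) / 5000
late = sum(sample_noise_level(1000, 1000) for _ in range(5000)) / 5000
print(f"mean sigma early: {early:.2f}, late: {late:.2f}")
```

Because the exponent on the uniform draw shrinks with training progress, late-stage samples concentrate near sigma_max, which matches the stated goal of robust reconstruction under high-intensity noise.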