Geometry Preserving Loss Functions Promote Improved Adaptation of Blackbox Generative Model

arXiv cs.LG / 4/28/2026


Key Points

  • The paper addresses the challenge of adapting black-box generative models to specific domains when model weights and gradients are not accessible and full fine-tuning is too costly.
  • It proposes an end-to-end domain adaptation pipeline that uses geometry-preserving loss functions together with pre-trained GANs.
  • By re-framing GAN inversion for more accurate latent space representations, the method extends existing state-of-the-art inverters to better match target distributions.
  • The approach is designed to preserve pairwise distances between tangent spaces, enabling training of a latent generative model that produces samples from the target distribution.
  • Experiments on StyleGANs under real distribution shifts show that adding the geometry-preserving loss improves adaptation quality versus traditional loss functions.
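The pairwise-distance preservation idea in the points above can be sketched as a simple loss term. The snippet below is an illustrative reconstruction, not the paper's exact formulation: it penalizes mismatch between the pairwise Euclidean distance matrices of two batches of latent codes, with a normalization step (an assumption here) so that a global scale difference between the two spaces does not dominate the loss. All function names are hypothetical.

```python
import numpy as np

def pairwise_distances(x):
    """Euclidean distance matrix for a batch of vectors of shape (n, d)."""
    diff = x[:, None, :] - x[None, :, :]          # (n, n, d) pairwise differences
    return np.sqrt((diff ** 2).sum(axis=-1))      # (n, n) distance matrix

def geometry_preserving_loss(z_source, z_target):
    """Penalize mismatch between the pairwise-distance structures of two
    batches of latent codes (illustrative sketch, not the paper's exact loss)."""
    d_src = pairwise_distances(z_source)
    d_tgt = pairwise_distances(z_target)
    # Normalize each distance matrix by its mean so only relative
    # geometry (not absolute scale) is compared.
    d_src = d_src / (d_src.mean() + 1e-8)
    d_tgt = d_tgt / (d_tgt.mean() + 1e-8)
    return ((d_src - d_tgt) ** 2).mean()
```

In a pipeline like the one described, such a term would be added to the latent generative model's training objective alongside a traditional reconstruction or adversarial loss, encouraging the learned mapping to act roughly as an isometry between the two latent spaces.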

Abstract

Adaptation of black-box generative models has been widely studied recently through several methods, including generator fine-tuning, latent space searches, and leveraging singular value decomposition. However, adapting large-scale generative AI tools to specific use cases remains challenging, as many of these industry-grade models are not made widely available. The traditional approach of fine-tuning certain layers of a generative network is not feasible, due both to the expense of storing and fine-tuning generative models and to restricted access to weights and gradients. Recognizing these challenges, we propose a novel end-to-end pipeline for domain adaptation that leverages geometry-preserving loss functions in conjunction with pre-trained generative adversarial networks (GANs). Our method rethinks the problem of adaptation by re-contextualizing the role of GAN inversion in obtaining accurate latent space representations. Extending the ability of existing state-of-the-art inverters, we preserve pairwise distances between tangent spaces to successfully train a latent generative model that produces samples from the target distribution. We evaluate our proposed pipeline on StyleGANs under real distribution shifts and demonstrate that introducing the geometry-preserving loss function leads to improved adaptation of generative models compared with traditional loss functions.