Few-Shot Generative Model Adaption via Identity Injection and Preservation

arXiv cs.CV / 3/25/2026


Key Points

  • The paper addresses few-shot generative model adaptation (under 10 samples) and the risk that fine-tuning causes mode collapse and source-domain “identity” forgetting, which harms target-domain image quality.
  • It proposes Identity Injection and Preservation (I²P), using an identity injection module to transfer source identity knowledge into the target-domain latent space so generated images keep key identity attributes.
  • I$^2$P also includes an identity substitution design (a style-content decoupler plus a reconstruction modulator) to further preserve identity information during adaptation.
  • The method enforces identity consistency via feature alignment/consistency constraints, and reports substantial improvements over the state of the art on multiple public datasets across five evaluation metrics.
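The paper does not publish implementation details here, but the injection idea in the bullets above can be sketched roughly as blending a source-domain identity feature into the target-domain latent code. Everything below is a hypothetical illustration: the function name, the projection matrix `W`, and the blend weight `alpha` are assumptions, not the authors' actual module.

```python
import numpy as np

def identity_injection(z_target, id_source, W, alpha=0.5):
    """Hypothetical sketch of an identity injection step.

    z_target  : (d,) latent code in the target domain
    id_source : (k,) identity feature extracted from the source domain
    W         : (d, k) learned projection from identity space to latent space
    alpha     : injection strength (0 = ignore source identity, 1 = replace)
    """
    # Map the source identity feature into the latent space,
    # then blend it with the target latent code.
    injected = W @ id_source
    return (1.0 - alpha) * z_target + alpha * injected
```

With `alpha = 0` the target latent is untouched; increasing `alpha` pulls generated samples toward the source identity. The real module would learn this mapping jointly with the generator rather than use a fixed linear blend.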

Abstract

Training generative models with limited data poses a severe risk of mode collapse. A common approach is to adapt a large pretrained generative model to a target domain with very few samples (fewer than 10), known as few-shot generative model adaptation. However, existing methods often forget source-domain identity knowledge during adaptation, which degrades the quality of generated images in the target domain. To address this, we propose Identity Injection and Preservation (I²P), which leverages identity injection and consistency alignment to preserve source identity knowledge. Specifically, we first introduce an identity injection module that integrates source-domain identity knowledge into the target domain's latent space, ensuring the generated images retain key identity knowledge of the source domain. Second, we design an identity substitution module, which includes a style-content decoupler and a reconstruction modulator, to further enhance source-domain identity preservation. We enforce identity consistency constraints by aligning features from identity substitution, thereby preserving identity knowledge. Both quantitative and qualitative experiments show that our method achieves substantial improvements over state-of-the-art methods on multiple public datasets across five metrics.
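The abstract's "identity consistency constraints by aligning features" can be illustrated with a generic feature-alignment loss. This is a minimal sketch under assumptions of our own: the cosine-distance form and the function name are hypothetical, and the paper's actual constraint may use a different distance or feature extractor.

```python
import numpy as np

def identity_consistency_loss(feat_adapted, feat_source):
    """Hypothetical feature-alignment constraint: penalise the distance
    between identity features of the adapted generator's outputs and
    of the corresponding source-domain images.

    feat_adapted, feat_source : (n, k) batches of identity embeddings
    """
    # L2-normalise so the loss compares identity direction, not magnitude.
    a = feat_adapted / np.linalg.norm(feat_adapted, axis=1, keepdims=True)
    s = feat_source / np.linalg.norm(feat_source, axis=1, keepdims=True)
    # 1 - cosine similarity, averaged over the batch; 0 means perfect alignment.
    return float(np.mean(1.0 - np.sum(a * s, axis=1)))
```

Minimising such a term during adaptation discourages the generator from drifting away from the source identities while it fits the few target samples.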