Reinforcement-Guided Synthetic Data Generation for Privacy-Sensitive Identity Recognition

arXiv cs.CV / 4/10/2026


Key Points

  • The paper proposes a reinforcement-guided synthetic data generation framework for privacy-sensitive identity recognition tasks, where access to real data is limited by regulatory and copyright constraints.
  • It uses a cold-start adaptation step to align a pretrained general-domain generative model with the target identity-recognition domain to improve semantic relevance and initial sample fidelity.
  • The method introduces a multi-objective reinforcement reward that balances semantic consistency, coverage diversity, and expression richness to produce realistic yet task-effective synthetic identities.
  • For downstream training, it adds dynamic sample selection to prioritize high-utility synthetic samples, enabling adaptive data scaling and better domain alignment in small-data regimes.
  • Experiments on benchmark datasets indicate improvements in both generation fidelity and classification accuracy, with strong generalization to new categories under limited data.
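The multi-objective reward described above can be pictured as a weighted combination of the three component scores. The paper summary does not specify the scoring functions or weights, so the sketch below is a hypothetical illustration: `consistency`, `diversity`, and `richness` are assumed to be normalized scores in [0, 1], and the weights are arbitrary placeholders.

```python
def multi_objective_reward(consistency: float,
                           diversity: float,
                           richness: float,
                           weights=(0.5, 0.3, 0.2)) -> float:
    """Hypothetical reward: weighted sum of semantic consistency,
    coverage diversity, and expression richness scores (each in [0, 1]).
    The actual reward design and weights in the paper may differ."""
    w_c, w_d, w_r = weights
    return w_c * consistency + w_d * diversity + w_r * richness


# Example: a sample that is highly consistent but not very diverse
# still earns a moderate overall reward under these placeholder weights.
reward = multi_objective_reward(consistency=0.8, diversity=0.5, richness=0.2)
```

A reward shaped this way would let the reinforcement signal trade off realism against coverage, rather than optimizing any single axis in isolation.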

Abstract

High-fidelity generative models are increasingly needed in privacy-sensitive scenarios, where access to data is severely restricted due to regulatory and copyright constraints. This scarcity hampers model development, ironically, in the very settings where generative models are most needed to compensate for the lack of data. This creates a self-reinforcing challenge: limited data leads to poor generative models, which in turn fail to mitigate data scarcity. To break this cycle, we propose a reinforcement-guided synthetic data generation framework that adapts general-domain generative priors to privacy-sensitive identity recognition tasks. We first perform a cold-start adaptation to align a pretrained generator with the target domain, establishing semantic relevance and initial fidelity. Building on this foundation, we introduce a multi-objective reward that jointly optimizes semantic consistency, coverage diversity, and expression richness, guiding the generator to produce both realistic and task-effective samples. During downstream training, a dynamic sample selection mechanism further prioritizes high-utility synthetic samples, enabling adaptive data scaling and improved domain alignment. Extensive experiments on benchmark datasets demonstrate that our framework significantly improves both generation fidelity and classification accuracy, while also exhibiting strong generalization to novel categories in small-data regimes.
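The dynamic sample selection step can be sketched as ranking the synthetic pool by a utility score and keeping only the top fraction for downstream training. The paper summary does not define the utility measure, so the sketch below leaves it as a caller-supplied `utility_fn` (in practice it might be, e.g., classifier confidence or a loss-based score); `keep_fraction` is likewise an illustrative assumption.

```python
def select_high_utility(samples, utility_fn, keep_fraction=0.5):
    """Hypothetical dynamic sample selection: rank synthetic samples
    by a user-supplied utility score and keep the top fraction.
    The paper's actual utility measure and schedule may differ."""
    ranked = sorted(samples, key=utility_fn, reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    return ranked[:k]


# Toy usage: samples are (id, utility) pairs; keep the top two thirds.
pool = [("a", 0.9), ("b", 0.1), ("c", 0.5)]
kept = select_high_utility(pool, utility_fn=lambda s: s[1], keep_fraction=0.67)
```

Re-running this selection each epoch, as the downstream classifier improves, is one way such a mechanism could adaptively scale the effective training set toward high-utility samples.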