Leveraging Avatar Fingerprinting: A Multi-Generator Photorealistic Talking-Head Public Database and Benchmark
arXiv cs.CV · March 31, 2026
Key Points
- The paper introduces AVAPrintDB, a new public talking-head avatar dataset built with multiple state-of-the-art avatar generators to support avatar fingerprinting research in more realistic impersonation settings.
- AVAPrintDB includes both self- and cross-reenactment samples, aiming to simulate legitimate usage and identity impersonation scenarios that existing datasets do not cover well.
- The authors define a standardized, reproducible benchmark for avatar fingerprinting and evaluate publicly available state-of-the-art methods alongside approaches built on foundation models such as DINOv2 and CLIP (a rough embedding-based baseline is sketched after this list).
- Results indicate that identity-relevant motion cues can persist across synthetic avatars, but current fingerprinting systems remain highly sensitive to changes in the generator or synthesis pipeline and to shifts in the source dataset.
- The dataset, benchmark protocols, and fingerprinting systems are released publicly to enable reproducible research and better study of robustness under domain and generator shift.
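To make the foundation-model point concrete, here is a minimal sketch of what an embedding-based fingerprinting baseline could look like: embed frames of a synthetic talking-head video with a pretrained vision model, pool them into a single descriptor, and match it against descriptors enrolled for known driving identities. This is not the paper's method; the CLIP backbone, mean pooling, and cosine-similarity matching are illustrative assumptions only.

```python
# Hypothetical embedding-based fingerprinting baseline (illustrative sketch,
# not the method proposed in the paper).
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def video_descriptor(frames: list[Image.Image]) -> torch.Tensor:
    """Embed each frame with CLIP and mean-pool into one L2-normalized vector."""
    inputs = processor(images=frames, return_tensors="pt")
    feats = model.get_image_features(**inputs)        # (num_frames, dim)
    feats = F.normalize(feats, dim=-1)
    return F.normalize(feats.mean(dim=0), dim=-1)     # (dim,)

def match_identity(query: torch.Tensor, gallery: dict[str, torch.Tensor]) -> str:
    """Return the enrolled identity whose descriptor is most similar to the query."""
    return max(gallery, key=lambda name: torch.dot(query, gallery[name]).item())
```

In a setup like the one the paper describes, the gallery would be enrolled from self-reenactment videos (legitimate use) and queries drawn from cross-reenactment videos (impersonation), with robustness measured by how matching accuracy degrades when the query comes from a generator or source dataset not seen during enrollment.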