AI Navigate

DAST: A Dual-Stream Voice Anonymization Attacker with Staged Training

arXiv cs.AI / 3/16/2026


Key Points

  • DAST is a dual-stream attacker that fuses spectral and self-supervised learning features via parallel encoders to evaluate privacy risks in voice anonymization.
  • It introduces a three-stage training strategy: Stage I builds foundational speaker-discriminative representations; Stage II exploits the identity-transformation traits shared by voice conversion and anonymization, training on diverse converted speech for robustness across systems; and Stage III provides lightweight adaptation to the target anonymized data.
  • Experiments on the VoicePrivacy Attacker Challenge (VPAC) dataset show that Stage II is the primary driver of generalization, enabling strong attack performance on unseen anonymization datasets, and that Stage III, using only 10% of the target data, surpasses current state-of-the-art attackers in equal error rate (EER).
  • The work highlights privacy evaluation challenges for voice anonymization and informs the design of more robust anonymization systems and evaluation protocols.
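The dual-stream fusion described in the key points can be sketched as two parallel encoders whose outputs are concatenated into a single speaker embedding. This is a toy illustration: the feature dimensions (80-dim spectral, 768-dim SSL, 192-dim per-stream embeddings) and the single-layer "encoders" are assumptions for clarity, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, weights):
    """Placeholder encoder: a single nonlinear projection."""
    return np.tanh(x @ weights)

spectral = rng.normal(size=(1, 80))    # e.g. one frame of mel-spectrogram features
ssl = rng.normal(size=(1, 768))        # e.g. wav2vec-style self-supervised features

# Per-stream projection weights (randomly initialized for the sketch)
w_spec = rng.normal(size=(80, 192)) * 0.1
w_ssl = rng.normal(size=(768, 192)) * 0.1

# Fuse the two streams by concatenation into one speaker embedding
embedding = np.concatenate([encode(spectral, w_spec), encode(ssl, w_ssl)], axis=1)
print(embedding.shape)  # (1, 384)
```

Concatenation is the simplest fusion choice; the point is that each stream keeps its own encoder, so spectral and self-supervised cues are learned in parallel before being combined.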

Abstract

Voice anonymization masks vocal traits while preserving linguistic content, yet the anonymized speech may still leak speaker-specific patterns. To assess and strengthen privacy evaluation, we propose a dual-stream attacker that fuses spectral and self-supervised learning features via parallel encoders with a three-stage training strategy. Stage I establishes foundational speaker-discriminative representations. Stage II leverages the shared identity-transformation characteristics of voice conversion and anonymization, exposing the model to diverse converted speech to build cross-system robustness. Stage III provides lightweight adaptation to target anonymized data. Results on the VoicePrivacy Attacker Challenge (VPAC) dataset demonstrate that Stage II is the primary driver of generalization, enabling strong attack performance on unseen anonymization datasets. With Stage III, fine-tuning on only 10% of the target anonymization dataset surpasses current state-of-the-art attackers in terms of EER.
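The staged curriculum in the abstract can be written down as a simple training schedule. Stage names and Stage III's 10% data fraction come from the text; the dataset labels, learning rates, and the `run_curriculum` helper are illustrative assumptions.

```python
# Sketch of the three-stage curriculum: base speaker modeling, robustness
# training on converted speech, then lightweight adaptation to the target.
STAGES = [
    {"stage": "I",   "data": "clean speech",             "lr": 1e-3, "fraction": 1.0},
    {"stage": "II",  "data": "voice-converted speech",   "lr": 1e-4, "fraction": 1.0},
    {"stage": "III", "data": "target anonymized speech", "lr": 1e-5, "fraction": 0.1},
]

def run_curriculum(stages):
    """Walk the schedule in order, returning one log line per stage."""
    log = []
    for cfg in stages:
        log.append(
            f"Stage {cfg['stage']}: train on {cfg['fraction']:.0%} of "
            f"{cfg['data']} at lr={cfg['lr']}"
        )
    return log

for line in run_curriculum(STAGES):
    print(line)
```

Decreasing the learning rate across stages reflects the progression from foundation training to light adaptation; in practice the Stage III step would fine-tune the Stage II checkpoint rather than start fresh.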