DPxFin: Adaptive Differential Privacy for Anti-Money Laundering Detection via Reputation-Weighted Federated Learning

arXiv cs.LG / 3/23/2026


Key Points

  • DPxFin introduces a reputation-guided adaptive differential privacy framework for privacy-preserving anti-money laundering detection via federated learning on tabular financial data.
  • The system computes client reputation by assessing the alignment between locally trained models and the global model, and allocates DP noise accordingly—lower noise for higher-reputation clients and higher noise for lower-reputation ones.
  • Evaluations on AML datasets under IID and non-IID settings using an MLP show a more favorable privacy-utility trade-off than traditional FL and fixed-noise DP baselines, with consistent though modest gains.
  • DPxFin also withstands tabular data leakage attacks, supporting its practicality in real-world financial environments.

Abstract

In the modern financial system, combating money laundering is a critical challenge, complicated by data privacy concerns and increasingly complex fraudulent transaction patterns. Federated learning (FL) is a promising approach because it allows institutions to train models without sharing their data, but it remains prone to privacy leakage, especially for tabular data such as financial records. To address this, we propose DPxFin, a novel federated framework that integrates reputation-guided adaptive differential privacy. Our approach computes client reputation by evaluating the alignment between locally trained models and the global model, and dynamically assigns differential privacy noise to client updates accordingly, enhancing privacy while maintaining overall model utility. Clients with higher reputation receive lower noise to amplify their trustworthy contributions, while low-reputation clients are allocated stronger noise to mitigate risk. We validate DPxFin on the Anti-Money Laundering (AML) dataset under both IID and non-IID settings using a Multi-Layer Perceptron (MLP). Experiments show that our approach achieves a more desirable trade-off between accuracy and privacy than traditional FL and fixed-noise differential privacy (DP) baselines, with consistent, albeit modest, performance improvements. Moreover, DPxFin withstands tabular data leakage attacks, demonstrating its effectiveness under real-world financial conditions.
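To make the reputation-to-noise mapping concrete, here is a minimal sketch of the idea described above. The paper does not publish its exact formulas in this summary, so the cosine-similarity reputation score, the linear noise schedule, and the function names (`reputation`, `adaptive_sigma`, `privatize`) are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def reputation(local_update, global_update, eps=1e-12):
    """Hypothetical reputation score: cosine alignment between a client's
    flattened update and the global update, clipped to [0, 1]."""
    cos = np.dot(local_update, global_update) / (
        np.linalg.norm(local_update) * np.linalg.norm(global_update) + eps)
    return max(0.0, float(cos))

def adaptive_sigma(rep, sigma_min=0.5, sigma_max=2.0):
    """Assumed linear schedule: higher reputation -> lower Gaussian noise,
    interpolating between sigma_max (rep=0) and sigma_min (rep=1)."""
    return sigma_max - rep * (sigma_max - sigma_min)

def privatize(local_update, rep, clip=1.0, rng=None):
    """Clip the update to bound sensitivity, then add reputation-scaled
    Gaussian noise before sending it to the server."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(local_update)
    clipped = local_update * min(1.0, clip / max(norm, 1e-12))
    sigma = adaptive_sigma(rep)
    return clipped + rng.normal(0.0, sigma * clip, size=local_update.shape)
```

A well-aligned client (reputation near 1) thus has its update perturbed with roughly a quarter of the noise scale applied to a misaligned one under these assumed bounds, which is the privacy-utility lever the abstract describes.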