Shapley Value-Guided Adaptive Ensemble Learning for Explainable Financial Fraud Detection with U.S. Regulatory Compliance Validation
arXiv cs.LG / 4/17/2026
Key Points
- The paper addresses a key barrier to deploying AI-based fraud detection in the U.S.: black-box model explanations that fail to meet auditability requirements under regulations such as OCC Bulletin 2011-12 and Federal Reserve SR 11-7.
- It evaluates explanation quality on two axes: faithfulness (sufficiency and comprehensiveness at k=5/10/15) and stability (Kendall’s W over 30 bootstrap samples; see the stability sketch after this list), finding that XGBoost with TreeExplainer yields near-perfect stability (W=0.9912) while LSTM with DeepExplainer is far weaker (W=0.4962).
- It proposes the SHAP-Guided Adaptive Ensemble (SGAE), which sets per-transaction ensemble weights from the agreement between base models’ SHAP attributions (a weighting sketch follows this list), achieving the best predictive performance with AUC-ROC of 0.8837 on held-out data and 0.9245 under cross-validation.
- Using the full 590,540-transaction IEEE-CIS dataset, the study compares LSTM, Transformer, and GNN-GraphSAGE, and reports GNN-GraphSAGE as the strongest of the three deep architectures (AUC-ROC 0.9248, F1=0.6013).
- The authors directly map their explanation and validation results to U.S. regulatory compliance needs across OCC, SR 11-7, and BSA-AML frameworks.
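
The stability figure quoted above is Kendall’s coefficient of concordance (W) over feature rankings from repeated SHAP runs. The sketch below is a generic illustration of that metric under stated assumptions, not the authors’ code: `explain_fn` is a caller-supplied wrapper (e.g. around `shap.TreeExplainer(model).shap_values`) returning per-row SHAP values, and ties in the rankings are ignored.

```python
# Hedged sketch: Kendall's W over bootstrapped SHAP feature rankings.
# Generic illustration of the stability metric named in the summary,
# not the paper's exact pipeline.
import numpy as np

def kendalls_w(rank_matrix: np.ndarray) -> float:
    """Kendall's coefficient of concordance (no tie correction).

    rank_matrix has shape (m, n): m raters (bootstrap samples), each row
    a ranking of the same n items (features), with ranks 1..n.
    """
    m, n = rank_matrix.shape
    rank_sums = rank_matrix.sum(axis=0)              # R_j per feature
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # squared deviations
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

def shap_rank_stability(explain_fn, X: np.ndarray, n_boot: int = 30,
                        seed: int = 0) -> float:
    """Rank features by mean |SHAP| on each bootstrap resample, return W.

    explain_fn(X_subset) -> SHAP values of shape (len(X_subset), n_features).
    """
    rng = np.random.default_rng(seed)
    n_rows, n_feat = X.shape
    ranks = np.empty((n_boot, n_feat))
    for b in range(n_boot):
        idx = rng.integers(0, n_rows, size=n_rows)          # bootstrap resample
        importance = np.abs(explain_fn(X[idx])).mean(axis=0)
        order = np.argsort(-importance)                      # most important first
        feature_ranks = np.empty(n_feat)
        feature_ranks[order] = np.arange(1, n_feat + 1)      # rank 1 = top feature
        ranks[b] = feature_ranks
    return kendalls_w(ranks)
```

Read against the reported numbers: W near 1 (XGBoost with TreeExplainer) means the 30 bootstrap runs produce nearly identical feature orderings, while W around 0.5 (LSTM with DeepExplainer) means the rankings shift substantially between resamples.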
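
The summary names SHAP-attribution agreement as the signal behind SGAE’s per-transaction weights but does not give the exact formula. The sketch below is one plausible reading for illustration only: the cosine-similarity-to-consensus agreement measure, the softmax normalization, and the `temperature` parameter are all assumptions, not the paper’s method.

```python
# Hedged sketch of per-transaction, SHAP-agreement-based ensemble weighting
# in the spirit of SGAE. The agreement measure and softmax weighting here
# are assumptions for illustration, not the authors' formulation.
import numpy as np

def adaptive_ensemble_proba(probas: np.ndarray, shap_vals: np.ndarray,
                            temperature: float = 1.0) -> np.ndarray:
    """Blend base-model fraud probabilities with per-transaction weights.

    probas:    (n_models, n_transactions) predicted fraud probabilities.
    shap_vals: (n_models, n_transactions, n_features) per-model SHAP values,
               assumed aligned on a shared feature space.
    Returns ensemble probabilities of shape (n_transactions,).
    """
    eps = 1e-12
    consensus = shap_vals.mean(axis=0)                       # (n_tx, n_feat)
    # Cosine similarity of each model's attribution vector to the consensus.
    num = (shap_vals * consensus[None]).sum(axis=-1)         # (n_models, n_tx)
    den = (np.linalg.norm(shap_vals, axis=-1)
           * np.linalg.norm(consensus, axis=-1)[None]) + eps
    agreement = num / den
    # Softmax over models: higher attribution agreement -> larger weight.
    logits = agreement / temperature
    weights = np.exp(logits - logits.max(axis=0, keepdims=True))
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * probas).sum(axis=0)
```

Under this reading, a model whose attributions agree with the ensemble consensus on a given transaction receives more weight for that transaction, so the weights themselves are derived from the same attributions an auditor would inspect.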
