From XAI to MLOps: Explainable Concept Drift Detection with Profile Drift Detection
arXiv stat.ML / 4/7/2026
Key Points
- Predictive models can lose accuracy as data distributions evolve (data drift); concept drift is especially hard to detect because the underlying input–output relationship changes, not just the marginal input distributions.
- The paper introduces Profile Drift Detection (PDD), a concept-drift detection method that uses explainable AI artifacts—specifically Partial Dependence Profiles (PDPs)—to capture meaningful changes in model behavior.
- PDD defines new, computationally efficient drift metrics that quantify how PDPs change over the data stream, improving sensitivity to conceptual shifts compared with traditional accuracy- or marginal-distribution-based signals.
- Experiments on synthetic and real-world datasets show PDD outperforms existing approaches while balancing sensitivity versus stability of drift alerts and maintaining high predictive performance.
- The authors position PDD as compatible with MLOps workflows, supporting continuous monitoring and adaptive retraining for real-time deployments in dynamic environments.
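The core idea above — comparing Partial Dependence Profiles across data windows to flag conceptual shifts — can be sketched in a few lines. The snippet below is illustrative, not the paper's method: it fits a simple least-squares model per window (a stand-in for any learner), computes a PDP for one feature, and uses an L2 distance between PDP curves as a hypothetical drift score; the paper's actual PDD metrics may differ.

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares linear model for one data window (stand-in for any learner)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Z: np.hstack([Z, np.ones((len(Z), 1))]) @ w

def pdp(predict, X, feature, grid):
    """Partial Dependence Profile: mean prediction as `feature` sweeps `grid`."""
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v          # force the feature to the grid value
        curve.append(predict(Xv).mean())
    return np.array(curve)

def pdp_drift(X_ref, y_ref, X_cur, y_cur, feature, grid):
    """Illustrative drift score: L2 distance between the PDPs of models
    fitted on a reference window and the current window."""
    p_ref = pdp(fit_linear(X_ref, y_ref), X_ref, feature, grid)
    p_cur = pdp(fit_linear(X_cur, y_cur), X_cur, feature, grid)
    return float(np.sqrt(np.mean((p_ref - p_cur) ** 2)))

rng = np.random.default_rng(0)
X1 = rng.normal(size=(500, 2)); y1 = 2.0 * X1[:, 0] + X1[:, 1]
X2 = rng.normal(size=(500, 2)); y2 = -2.0 * X2[:, 0] + X2[:, 1]  # slope flips: concept drift
grid = np.linspace(-2, 2, 11)

stable = pdp_drift(X1, y1, X1, y1, 0, grid)  # same window twice: near zero
drift = pdp_drift(X1, y1, X2, y2, 0, grid)   # reversed relationship: large score
```

In an MLOps loop, such a score would be evaluated on each incoming window and compared against a threshold to trigger alerts or retraining; PDD's contribution is defining efficient, sensitivity-stable metrics for exactly this comparison.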
