PREF-XAI: Preference-Based Personalized Rule Explanations of Black-Box Machine Learning Models

arXiv cs.LG / 4/22/2026


Key Points

  • The paper argues that XAI explanations should be tailored to individual users’ goals, preferences, and cognitive constraints, rather than using one-size-fits-all, model-centric approximations.
  • It introduces PREF-XAI, reframing explanation generation as a preference-driven selection problem where multiple candidate explanations are evaluated against user-specific criteria.
  • The proposed method generates rule-based explanation candidates and applies formal preference learning: preferences are elicited by asking the user to rank a small set of candidates, then modeled with an additive utility function inferred through robust ordinal regression.
  • Experiments on real-world datasets indicate the approach can reconstruct user preferences from limited feedback, surface the most relevant explanations, and even discover explanation rules users did not initially consider.
  • By connecting XAI with preference learning, the work motivates more interactive and adaptive explanation systems that improve over time with user input.

Abstract

Explainable artificial intelligence (XAI) has predominantly focused on generating model-centric explanations that approximate the behavior of black-box models. However, such explanations often overlook a fundamental aspect of interpretability: different users require different explanations depending on their goals, preferences, and cognitive constraints. Although recent work has explored user-centric and personalized explanations, most existing approaches rely on heuristic adaptations or implicit user modeling, lacking a principled framework for representing and learning individual preferences. In this paper, we introduce Preference-Based Explainable Artificial Intelligence (PREF-XAI), a novel perspective that reframes explanation as a preference-driven decision problem. Within PREF-XAI, explanations are not treated as fixed outputs, but as alternatives to be evaluated and selected according to user-specific criteria. Building on this perspective, we propose a methodology that combines rule-based explanations with formal preference learning. User preferences are elicited through a ranking of a small set of candidate explanations and modeled via an additive utility function inferred using robust ordinal regression. Experimental results on real-world datasets show that PREF-XAI can accurately reconstruct user preferences from limited feedback, identify highly relevant explanations, and discover novel explanatory rules not initially considered by the user. Beyond the proposed methodology, this work establishes a connection between XAI and preference learning, opening new directions for interactive and adaptive explanation systems.
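The selection loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the candidate rules, the three criteria (coverage, precision, simplicity), and all scores are invented, and a simple grid search over weight vectors stands in for the linear programs typically solved in robust ordinal regression. The key idea survives the simplification: a partial ranking from the user constrains the set of compatible additive utility models, and an unranked explanation can still emerge as a possible top choice under some compatible model.

```python
from itertools import product

# Hypothetical candidate rule explanations, each scored on three
# illustrative criteria (coverage, precision, simplicity) in [0, 1].
candidates = {
    "r1": (0.9, 0.6, 0.3),
    "r2": (0.5, 0.8, 0.7),
    "r3": (0.4, 0.5, 0.9),  # left out of the user's ranking
    "r4": (0.7, 0.7, 0.5),
}

# User feedback: a ranking of a small subset of candidates, best first.
user_ranking = ["r2", "r4", "r1"]

def utility(x, w):
    """Additive linear utility: weighted sum of criterion values."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Enumerate weight vectors on a simplex grid and keep those compatible
# with the user's ranking (a crude stand-in for the LP feasibility
# checks used in robust ordinal regression).
step = 0.05
grid = [round(i * step, 2) for i in range(int(1 / step) + 1)]
compatible = []
for w1, w2 in product(grid, repeat=2):
    w3 = round(1.0 - w1 - w2, 2)
    if w3 < 0:
        continue
    w = (w1, w2, w3)
    scores = [utility(candidates[r], w) for r in user_ranking]
    if all(scores[i] > scores[i + 1] for i in range(len(scores) - 1)):
        compatible.append(w)

def possibly_best(a):
    """True if `a` tops every rival under at least one compatible model."""
    return any(all(utility(candidates[a], w) >= utility(candidates[b], w)
                   for b in candidates if b != a)
               for w in compatible)

print(len(compatible) > 0)   # some utility models fit the ranking
print(possibly_best("r3"))   # the unranked rule can still be optimal
```

With these toy numbers, the unranked rule `r3` is the possible winner whenever the compatible model weights simplicity heavily, which mirrors the paper's observation that preference learning can surface explanations the user did not initially consider.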