Robust Explanations for User Trust in Enterprise NLP Systems

arXiv cs.CL, April 15, 2026


Key Points

  • The paper addresses how to evaluate whether token-level explanations for enterprise NLP remain robust and trustworthy when models are accessed only through black-box APIs, limiting typical representation-based explainer methods.
  • It proposes a unified black-box robustness evaluation framework using leave-one-out occlusion and quantifies stability via a “top-token flip rate” under realistic perturbations (swap, deletion, shuffling, and back-translation) across multiple severities.
  • Experiments on three benchmark datasets and six encoder/decoder models (BERT, RoBERTa, Qwen 7B/14B, Llama 8B/70B) over 64,800 cases show decoder LLMs have substantially more stable explanations than encoder baselines, with 73% lower flip rates on average.
  • The study finds explanation stability increases with model scale (about a 44% gain from 7B to 70B) and links robustness to inference cost, producing a cost–robustness tradeoff curve to guide model/explanation selection for compliance-sensitive deployments.
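The leave-one-out occlusion explainer described above needs only input–output access, which is what makes it viable for API-only models. A minimal sketch of the idea (the `score` callable and the toy scorer are hypothetical stand-ins for a black-box model API, not the paper's implementation):

```python
from typing import Callable, List

def occlusion_importance(tokens: List[str],
                         score: Callable[[str], float]) -> List[float]:
    """Leave-one-out occlusion: a token's importance is the drop in the
    model's (black-box) score when that single token is deleted."""
    base = score(" ".join(tokens))
    importances = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + tokens[i + 1:]  # input with token i removed
        importances.append(base - score(" ".join(occluded)))
    return importances

# Toy stand-in for a black-box API: scores texts mentioning "refund" highly.
toy_score = lambda text: 0.9 if "refund" in text else 0.2

imps = occlusion_importance("please process my refund now".split(), toy_score)
# The token "refund" receives the largest importance score.
```

Because each token requires one extra forward call, the cost grows linearly with input length, which is part of why the paper's cost–robustness tradeoff matters in practice.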

Abstract

Robust explanations are increasingly required for user trust in enterprise NLP, yet pre-deployment validation is difficult in the common case of black-box deployment (API-only access), where representation-based explainers are infeasible. Existing studies also provide limited guidance on whether explanations remain stable under real user noise, especially when organizations migrate from encoder classifiers to decoder LLMs. To close this gap, we propose a unified black-box robustness evaluation framework for token-level explanations based on leave-one-out occlusion, and operationalize explanation robustness as the top-token flip rate under realistic perturbations (swap, deletion, shuffling, and back-translation) at multiple severity levels. Using this protocol, we conduct a systematic cross-architecture comparison across three benchmark datasets and six models spanning encoder and decoder families (BERT, RoBERTa, Qwen 7B/14B, Llama 8B/70B; 64,800 cases). We find that decoder LLMs produce substantially more stable explanations than encoder baselines (73% lower flip rates on average), and that stability improves with model scale (a 44% gain from 7B to 70B). Finally, we relate robustness improvements to inference cost, yielding a practical cost–robustness tradeoff curve that supports model and explanation selection prior to deployment in compliance-sensitive applications.
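The top-token flip rate can be made concrete with a small sketch: given attribution vectors for the original input and for several perturbed versions, it is the fraction of perturbations under which the top-attributed token changes. This simplified version assumes token positions stay aligned across perturbations (deletions and back-translation would require an alignment step, which is omitted here); the function name is illustrative, not from the paper:

```python
from typing import List

def top_token_flip_rate(original_attrib: List[float],
                        perturbed_attribs: List[List[float]]) -> float:
    """Fraction of perturbed inputs whose top-attributed token differs
    from the original input's top token (lower = more robust)."""
    def argmax(xs: List[float]) -> int:
        return max(range(len(xs)), key=xs.__getitem__)

    top = argmax(original_attrib)
    flips = sum(1 for a in perturbed_attribs if argmax(a) != top)
    return flips / len(perturbed_attribs)

rate = top_token_flip_rate(
    [0.1, 0.7, 0.2],          # original: token 1 is top
    [[0.1, 0.6, 0.3],         # top token unchanged
     [0.5, 0.2, 0.3],         # top token flipped to token 0
     [0.0, 0.8, 0.1]])        # unchanged
# → one flip out of three perturbations
```

Averaging this rate over a test set and over perturbation types and severities yields the per-model stability numbers the paper compares across architectures.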