AI Navigate

RTD-Guard: A Black-Box Textual Adversarial Detection Framework via Replacement Token Detection

arXiv cs.CL / 3/16/2026


Key Points

  • RTD-Guard is a black-box framework for detecting textual adversarial examples that leverages a pre-trained Replaced Token Detection (RTD) discriminator to identify substituted tokens without fine-tuning.
  • It localizes suspicious tokens, masks them, and detects adversarial examples by observing the prediction confidence shift of the victim model before and after intervention, using only two black-box queries.
  • The approach requires no adversarial data, model tuning, or internal model access, making it practical for deployment in privacy-sensitive or resource-constrained environments.
  • Comprehensive experiments on multiple benchmark datasets show RTD-Guard surpasses existing detection baselines across multiple metrics, demonstrating its efficiency and practicality.
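The detect-by-intervention pipeline above can be sketched in a few lines. The function below is a minimal illustration, not the authors' implementation: `rtd_scores`, `victim_confidence`, the threshold `tau`, and the stub models in the usage example are all hypothetical stand-ins. In practice the discriminator would be an off-the-shelf pre-trained RTD model (e.g. an ELECTRA-style discriminator head) and the victim a black-box classifier queried exactly twice.

```python
from typing import Callable, List

def rtd_guard_score(
    tokens: List[str],
    rtd_scores: Callable[[List[str]], List[float]],   # per-token "replaced" probability (assumed interface)
    victim_confidence: Callable[[List[str]], float],  # black-box query: victim's top-class confidence
    tau: float = 0.5,          # flagging threshold for suspicious tokens (illustrative value)
    mask_token: str = "[MASK]",
) -> float:
    """Return the victim's confidence shift after masking suspicious tokens.

    Step 1 uses only the RTD discriminator (no victim access) to localize
    likely-substituted tokens. Step 2 makes exactly two black-box queries:
    one on the original text, one on the masked text. A large positive
    shift suggests the input was adversarial.
    """
    # Step 1: localize and mask tokens the RTD discriminator deems replaced.
    scores = rtd_scores(tokens)
    masked = [mask_token if s > tau else t for t, s in zip(tokens, scores)]

    # Step 2: two black-box queries; compare confidence before and after intervention.
    return victim_confidence(tokens) - victim_confidence(masked)


# Toy stubs for demonstration only: the discriminator flags the misspelled
# word "terrrible", and the victim is confident until that word is masked.
toks = ["the", "movie", "was", "terrrible"]
stub_rtd = lambda ts: [0.9 if t == "terrrible" else 0.1 for t in ts]
stub_victim = lambda ts: 0.95 if "terrrible" in ts else 0.55
shift = rtd_guard_score(toks, stub_rtd, stub_victim)
# shift = 0.95 - 0.55 = 0.40
```

An input is flagged as adversarial when the shift exceeds a calibrated decision threshold; since clean text contains few tokens the discriminator flags, its confidence shift stays small.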

Abstract

Textual adversarial attacks pose a serious security threat to Natural Language Processing (NLP) systems by introducing imperceptible perturbations that mislead deep learning models. While adversarial example detection offers a lightweight alternative to robust training, existing methods typically rely on prior knowledge of attacks, white-box access to the victim model, or numerous queries, which severely limits their practical deployment. This paper introduces RTD-Guard, a novel black-box framework for detecting textual adversarial examples. Our key insight is that word-substitution perturbations in adversarial attacks closely resemble the "replaced tokens" that a Replaced Token Detection (RTD) discriminator is pre-trained to identify. Leveraging this, RTD-Guard employs an off-the-shelf RTD discriminator, without fine-tuning, to localize suspicious tokens, masks them, and detects adversarial examples by observing the prediction confidence shift of the victim model before and after intervention. The entire process requires no adversarial data, model tuning, or internal model access, and uses only two black-box queries. Comprehensive experiments on multiple benchmark datasets demonstrate that RTD-Guard effectively detects adversarial texts generated by diverse state-of-the-art attack methods. It surpasses existing detection baselines across multiple metrics, offering a highly efficient, practical, and resource-light defense mechanism, particularly suited for real-world deployment in resource-constrained or privacy-sensitive environments.