RedacBench: Can AI Erase Your Secrets?

arXiv cs.AI / 2026-03-24


Key Points

  • The paper introduces RedacBench, a new benchmark for evaluating policy-conditioned redaction by language models across multiple domains and strategies.
  • RedacBench is built from 514 human-authored texts paired with 187 security policies, and uses 8,053 annotated propositions that capture all information inferable from each document.
  • The benchmark measures both security (removing policy-violating sensitive propositions) and utility (preserving non-sensitive propositions and overall semantics).
  • Experimental results across state-of-the-art language models suggest that stronger models can improve security, but maintaining utility remains difficult.
  • The authors release the dataset along with a web-based playground that supports dataset customization and evaluation for future research.
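The summary does not spell out how the security and utility scores are computed from the annotated propositions. A minimal sketch of one plausible proposition-level scoring scheme, assuming security is the fraction of policy-violating propositions no longer inferable from the redacted text and utility is the fraction of non-sensitive propositions preserved (the function name and formulas here are illustrative, not the paper's exact metric):

```python
def redaction_scores(propositions, still_inferable):
    """Assumed proposition-level scoring, not the paper's exact metric.

    propositions: dict mapping proposition id -> True if it violates
                  the security policy (i.e., is sensitive).
    still_inferable: set of proposition ids still inferable from the
                     redacted text.
    """
    sensitive = {p for p, is_sensitive in propositions.items() if is_sensitive}
    benign = set(propositions) - sensitive

    # Security: share of sensitive propositions successfully removed.
    security = (
        1 - len(sensitive & still_inferable) / len(sensitive) if sensitive else 1.0
    )
    # Utility: share of non-sensitive propositions still preserved.
    utility = (
        len(benign & still_inferable) / len(benign) if benign else 1.0
    )
    return security, utility


# Example: a perfect redaction removes both sensitive propositions
# while keeping both benign ones.
props = {"p1": True, "p2": True, "p3": False, "p4": False}
sec, util = redaction_scores(props, still_inferable={"p3", "p4"})
```

Under this assumed scheme, the security-utility tension the authors report corresponds to over-redaction: removing a sensitive proposition often takes neighboring benign propositions with it, raising security while lowering utility.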

Abstract

Modern language models can readily extract sensitive information from unstructured text, making redaction -- the selective removal of such information -- critical for data security. However, existing benchmarks for redaction typically focus on predefined categories of data such as personally identifiable information (PII) or evaluate specific techniques like masking. To address this limitation, we introduce RedacBench, a comprehensive benchmark for evaluating policy-conditioned redaction across domains and strategies. Constructed from 514 human-authored texts spanning individual, corporate, and government sources, paired with 187 security policies, RedacBench measures a model's ability to selectively remove policy-violating information while preserving the original semantics. We quantify performance using 8,053 annotated propositions that capture all inferable information in each text. This enables assessment of both security -- the removal of sensitive propositions -- and utility -- the preservation of non-sensitive propositions. Experiments across multiple redaction strategies and state-of-the-art language models show that while more advanced models can improve security, preserving utility remains a challenge. To facilitate future research, we release RedacBench along with a web-based playground for dataset customization and evaluation. Available at https://hyunjunian.github.io/redaction-playground/.