Retrieval Augmented Classification for Confidential Documents

arXiv cs.AI / 4/13/2026


Key Points

  • The paper introduces Retrieval Augmented Classification (RAC) for classifying confidential documents while minimizing leakage by grounding decisions in an external retrieval/vector store rather than updating model weights with sensitive content.
  • In experiments on the WikiLeaks US Diplomacy corpus under realistic sequence-length constraints, RAC matches supervised fine-tuning (FT) on balanced data but is more stable on unbalanced data.
  • The reported results show RAC achieving about 96% accuracy on both the original unbalanced and augmented balanced sets, and up to 94% F1 with proper prompting, while FT shows weaker generalization across class imbalance settings.
  • RAC is positioned as more practical for governed deployments because it can be updated via reindexing to incorporate new data without retraining, and it is designed to remain robust as class balance, context length, and governance requirements change.
  • The authors contribute a RAC classification pipeline and evaluation recipe, an experimental study isolating class imbalance and context-length effects, and design guidance for RAC in security-preserving, controlled environments.
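The mechanics behind these points can be illustrated with a minimal sketch of retrieval-grounded classification. Everything here is hypothetical scaffolding, not the paper's implementation: a toy bag-of-words embedding stands in for a real encoder, a Python list stands in for the vector store, and classification is a k-nearest-neighbor majority vote over cosine similarity. Note that adding new labeled documents ("reindexing") touches only the index, never model weights.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words embedding (stand-in for a real text encoder)."""
    return dict(Counter(text.lower().split()))

def cosine(a, b):
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class RACIndex:
    """In-memory stand-in for a vector store of labeled reference documents."""

    def __init__(self):
        self.entries = []  # list of (embedding, label) pairs

    def add(self, text, label):
        # "Reindexing": new labeled data lands here immediately,
        # with no retraining and no sensitive content in model weights.
        self.entries.append((embed(text), label))

    def classify(self, text, k=3):
        q = embed(text)
        # Retrieve the k most similar labeled neighbors...
        nearest = sorted(self.entries,
                         key=lambda e: cosine(q, e[0]),
                         reverse=True)[:k]
        # ...and ground the decision in a majority vote over their labels.
        return Counter(label for _, label in nearest).most_common(1)[0][0]
```

A short usage example: after indexing a few labeled reference documents, a new document is graded by its retrieved neighbors, and new data can be incorporated at any time with another `add` call.

```python
index = RACIndex()
index.add("cable discusses classified negotiations", "CONFIDENTIAL")
index.add("press release on public ceremony", "UNCLASSIFIED")
index.add("secret memo on classified sources", "CONFIDENTIAL")
print(index.classify("memo on classified negotiations"))  # -> CONFIDENTIAL
```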

Abstract

Unauthorized disclosure of confidential documents demands robust, low-leakage classification. Real work environments see a continual inflow and outflow of documents, so the classifier's knowledge must be updated continuously; to that end, we propose a methodology for classifying confidential documents using Retrieval Augmented Classification (RAC). To confirm its effectiveness, we compare RAC and supervised fine-tuning (FT) on the WikiLeaks US Diplomacy corpus under realistic sequence-length constraints. On balanced data, RAC matches FT. On unbalanced data, RAC is more stable while delivering comparable performance, about 96% accuracy on both the original (unbalanced) and augmented (balanced) sets and up to 94% F1 with proper prompting, whereas FT attains 90% F1 when trained on the augmented, balanced set but drops to 88% F1 when trained on the original, unbalanced set. When robust augmentation is infeasible, RAC provides a practical, security-preserving path to strong classification by keeping sensitive content out of model weights and under the deployer's control, and it remains robust as real-world conditions change in class balance, data, context length, or governance requirements. Because RAC grounds decisions in an external vector store with similarity matching, it is less sensitive to label skew, reduces parameter-level leakage, and can incorporate new data immediately via reindexing, a step that is difficult for FT, which typically requires retraining. The contributions of this paper are threefold: first, a RAC-based classification pipeline and evaluation recipe; second, a controlled study that isolates class imbalance and context-length effects for FT versus RAC in confidential-document grading; and third, actionable guidance on RAC design patterns for governed deployments.
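The abstract's "up to 94% F1 with proper prompting" implies that retrieved neighbors are not only voted over but can also be packed into a classification prompt for a language model. The helper below is a hypothetical sketch of that packing step; the paper's actual prompt template is not given here, and `query_doc`, `neighbors`, and the label wording are illustrative assumptions.

```python
def build_rac_prompt(query_doc, neighbors):
    """Assemble a few-shot classification prompt from retrieved examples.

    `neighbors` is a list of (text, label) pairs returned by similarity
    search over the external vector store; the model is asked to grade
    the query document by analogy to these grounded references.
    """
    lines = [
        "Classify the confidentiality level of the final document.",
        "Labeled reference documents retrieved by similarity:",
    ]
    for i, (text, label) in enumerate(neighbors, start=1):
        lines.append(f"{i}. [{label}] {text}")
    lines.append(f"Document: {query_doc}")
    lines.append("Label:")
    return "\n".join(lines)
```

Because the references live in the prompt rather than in fine-tuned weights, swapping the retrieval results (e.g. after reindexing) changes the model's grounding immediately, which is the property the abstract contrasts with FT's retraining requirement. Sequence-length constraints then cap how many neighbors fit in `neighbors`.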