
CLASP: Defending Hybrid Large Language Models Against Hidden State Poisoning Attacks

arXiv cs.CL / 3/13/2026


Key Points

  • CLASP defends hybrid SSM-based LLMs against Hidden State Poisoning Attacks by framing the mitigation as a token-level binary classification problem using an XGBoost classifier on block output embeddings.
  • It achieves high detection performance in a realistic resume-scanning scenario: 95.9% token-level F1 and 99.3% document-level F1 on malicious tokens, with strong generalization to unseen attack patterns (96.9% doc-level F1 in leave-one-out; 91.6% doc-level F1 under structurally novel triggers).
  • CLASP operates with modest resources—1,032 tokens/second and under 4 GB of VRAM—making it a lightweight front-line defense that is independent of downstream models.
  • The paper provides code and detailed results at the linked URL, illustrating a practical defense technique for SSM-based and hybrid architectures.

Abstract

State space models (SSMs) like Mamba have gained significant traction as efficient alternatives to Transformers, achieving linear complexity while maintaining competitive performance. However, Hidden State Poisoning Attacks (HiSPAs), a recently discovered vulnerability that corrupts SSM memory through adversarial strings, pose a critical threat to these architectures and their hybrid variants. Framing the HiSPA mitigation task as a binary classification problem at the token level, we introduce the CLASP model to defend against this threat. CLASP exploits distinct patterns in Mamba's block output embeddings (BOEs) and uses an XGBoost classifier to identify malicious tokens with minimal computational overhead. We consider a realistic scenario in which both SSMs and HiSPAs are likely to be used: an LLM screening résumés to identify the best candidates for a role. Evaluated on a corpus of 2,483 résumés totaling 9.5M tokens with controlled injections, CLASP achieves a 95.9% token-level F1 score and a 99.3% document-level F1 score on malicious token detection. Crucially, the model generalizes to unseen attack patterns: under leave-one-out cross-validation, performance remains high (96.9% document-level F1), while under clustered cross-validation with structurally novel triggers, it maintains useful detection capability (91.6% average document-level F1). Operating independently of any downstream model, CLASP processes 1,032 tokens per second with under 4 GB VRAM consumption, potentially making it suitable for real-world deployment as a lightweight front-line defense for SSM-based and hybrid architectures. All code and detailed results are available at https://anonymous.4open.science/r/hispikes-91C0.
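The abstract reports both token-level and document-level F1, which implies some rule for lifting per-token predictions to a per-document verdict. The paper's exact aggregation is not described here, so the threshold rule below is purely an assumption for illustration: flag a résumé if enough tokens exceed a probability cutoff.

```python
# Hedged sketch: aggregating per-token maliciousness probabilities
# into a document-level decision. The "at least `min_flagged` tokens
# above `tau`" rule is an illustrative assumption, not CLASP's
# documented aggregation.
def document_is_malicious(token_probs, tau=0.5, min_flagged=3):
    """Flag a document if at least `min_flagged` tokens exceed `tau`."""
    return sum(p > tau for p in token_probs) >= min_flagged

# A clean résumé: all token probabilities low.
clean = [0.02, 0.10, 0.05, 0.01]
# A poisoned résumé: a run of injected tokens scores high.
poisoned = [0.03, 0.97, 0.92, 0.88, 0.04]
```

A rule like this also explains why document-level F1 (99.3%) can exceed token-level F1 (95.9%): scattered per-token errors can wash out when the decision only needs a cluster of confident detections.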