Harnessing Hyperbolic Geometry for Harmful Prompt Detection and Sanitization

arXiv cs.AI / 4/10/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper targets safety vulnerabilities in vision-language models (VLMs) where malicious prompts can induce unsafe outputs through shared embedding alignment between text and images.
  • It proposes HyPE, a lightweight anomaly detector that uses hyperbolic geometry to model benign prompts and flag harmful ones as geometric outliers.
  • It adds HyPS, a sanitization step that uses explainable attribution to locate specific harmful words and selectively modify them while preserving the user’s original intent/semantics.
  • Experiments across multiple datasets and adversarial scenarios show that HyPE+HyPS outperforms prior defenses in both detection accuracy and robustness to embedding-level attacks.
  • The approach is positioned as efficient and interpretable compared with blacklist filters (easily bypassed) and heavier classifier-based systems (costly and fragile).
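The core of HyPE, detecting harmful prompts as geometric outliers in hyperbolic space, can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes prompt embeddings are available as Euclidean vectors, maps them into the Poincaré ball via the exponential map at the origin, models benign prompts by a reference point plus a distance threshold, and flags anything beyond that threshold. The function names and the simple Euclidean-mean reference point are illustrative choices.

```python
import numpy as np

def exp_map_origin(v, eps=1e-9):
    # Map a Euclidean (tangent) vector into the Poincare ball via the
    # exponential map at the origin (curvature -1): exp_0(v) = tanh(|v|) v/|v|.
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return np.tanh(norm) * v / (norm + eps)

def poincare_dist(u, v, eps=1e-9):
    # Geodesic distance on the Poincare ball:
    # d(u, v) = arccosh(1 + 2|u - v|^2 / ((1 - |u|^2)(1 - |v|^2))).
    sq = np.sum((u - v) ** 2, axis=-1)
    du = 1.0 - np.sum(u * u, axis=-1)
    dv = 1.0 - np.sum(v * v, axis=-1)
    return np.arccosh(1.0 + 2.0 * sq / (du * dv + eps))

def fit_benign_region(benign_embeddings, quantile=0.95):
    # Model benign prompts: project them into the ball, take a simple
    # mean reference point, and set the outlier threshold from the
    # empirical distribution of benign geodesic distances.
    pts = exp_map_origin(benign_embeddings)
    center = pts.mean(axis=0)
    center /= max(1.0, np.linalg.norm(center) * 1.01)  # stay inside the ball
    dists = poincare_dist(pts, center)
    return center, np.quantile(dists, quantile)

def is_outlier(embedding, center, threshold):
    # Flag a prompt as potentially harmful if its projection lies
    # outside the benign region.
    return poincare_dist(exp_map_origin(embedding), center) > threshold
```

A usage sketch: fit the benign region on a cluster of embeddings, then score a far-away embedding; the far point exceeds the threshold while almost all benign points fall inside it. The appeal of the hyperbolic distance here is that volume grows exponentially toward the boundary, so the benign region stays compact while outliers are pushed far away.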

Abstract

Vision-Language Models (VLMs) have become essential for tasks such as image synthesis, captioning, and retrieval by aligning textual and visual information in a shared embedding space. Yet this flexibility also makes them vulnerable to malicious prompts designed to produce unsafe content, raising critical safety concerns. Existing defenses either rely on blacklist filters, which are easily circumvented, or on heavy classifier-based systems, which are costly and fragile under embedding-level attacks. We address these challenges with two complementary components: Hyperbolic Prompt Espial (HyPE) and Hyperbolic Prompt Sanitization (HyPS). HyPE is a lightweight anomaly detector that leverages the structured geometry of hyperbolic space to model benign prompts and detect harmful ones as outliers. HyPS builds on this detection by applying explainable attribution methods to identify and selectively modify harmful words, neutralizing unsafe intent while preserving the original semantics of user prompts. Through extensive experiments across multiple datasets and adversarial scenarios, we demonstrate that our framework consistently outperforms prior defenses in both detection accuracy and robustness. Together, HyPE and HyPS offer an efficient, interpretable, and resilient approach to safeguarding VLMs against malicious prompt misuse.
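The HyPS idea, attributing a harmful detection to specific words and rewriting only those, can be sketched with a toy leave-one-out attribution scheme. This is an illustration, not the paper's method: `harm_score` stands in for whatever scorer the detector provides (here a hypothetical callable returning a harmfulness value in [0, 1]), and tokens whose removal drops the score by more than a margin are replaced with a placeholder, leaving the rest of the prompt intact.

```python
from typing import Callable, List

def sanitize_prompt(tokens: List[str],
                    harm_score: Callable[[List[str]], float],
                    mask: str = "[SAFE]",
                    min_drop: float = 0.1) -> List[str]:
    # Leave-one-out attribution: blame each token by how much deleting it
    # lowers the harmfulness score, then mask only the attributed tokens
    # so the rest of the user's prompt is preserved.
    base = harm_score(tokens)
    sanitized = list(tokens)
    for i in range(len(tokens)):
        drop = base - harm_score(tokens[:i] + tokens[i + 1:])
        if drop >= min_drop:
            sanitized[i] = mask
    return sanitized
```

With a toy keyword-based scorer, `sanitize_prompt("how to build a bomb".split(), score)` masks only the word driving the score and leaves the surrounding tokens untouched, which matches the stated goal of neutralizing unsafe intent while preserving the prompt's remaining semantics.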