See No Evil: Semantic Context-Aware Privacy Risk Detection for AR

arXiv cs.CV · April 28, 2026


Key Points

  • The paper argues that current AR privacy risk frameworks are limited because they do not understand the semantic context of what the AR camera sees.
  • It introduces PrivAR, a semantic context-aware privacy risk detection approach for AR that uses vision-language models with chain-of-thought prompting to infer sensitive information types from visual scene cues.
  • PrivAR goes beyond detection: it obfuscates sensitive textual content while preserving the contextual cues the VLM needs for accurate inference.
  • Experiments on a real-world AR dataset report strong performance, including 81.48% accuracy and an F1-score of 84.62%, along with reduced privacy leakage to 17.58%.
  • The work also explores contextually informed warning interfaces and reports user-study findings to guide more effective privacy-aware AR UX design.

Abstract

Augmented reality (AR) systems pose unique privacy risks due to their continuous capture of visual data. Existing AR privacy frameworks lack semantic understanding of visual content, limiting their effectiveness in detecting context-dependent privacy risks. We propose PrivAR, which leverages vision-language models (VLMs) with chain-of-thought prompting for contextual privacy risk detection in AR environments. PrivAR uses visual scene cues to infer potential sensitive information types, such as identifying password notes in office environments through contextual reasoning. PrivAR detects and obfuscates textual content, preventing exposure of sensitive information while preserving contextual cues necessary for VLM inference. Additionally, we investigate contextually informed warning interfaces to enhance user privacy awareness. Experiments on a real-world AR dataset show that PrivAR achieves superior accuracy (81.48%) and F1-score (84.62%) compared to baselines, while reducing the privacy leakage rate to 17.58%. User studies evaluating contextually informed warning interfaces provide insights into effective privacy-aware AR design.
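To make the two-stage pipeline concrete, here is a minimal sketch of what the abstract describes: (1) a chain-of-thought prompt that asks a VLM to reason from scene context to sensitive-information types, and (2) masking sensitive text spans while leaving surrounding context words intact so contextual inference still works. The prompt template, category examples, and `redact` heuristic are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of PrivAR's two stages, as summarized in the abstract.
# The template wording and the regex-based redaction are assumptions for
# illustration; the paper's real prompts and obfuscation method may differ.
import re

COT_TEMPLATE = (
    "You are a privacy auditor for an AR camera feed.\n"
    "Scene: {scene}\n"
    "Step 1: List the objects and text regions visible in the scene.\n"
    "Step 2: Reason about which of them could reveal sensitive information\n"
    "        (e.g., passwords, IDs, addresses) given the surrounding context.\n"
    "Step 3: Output the inferred sensitive-information types."
)

def build_cot_prompt(scene_description: str) -> str:
    """Fill the chain-of-thought template with a scene/OCR summary
    before sending it to a vision-language model."""
    return COT_TEMPLATE.format(scene=scene_description)

def redact(text: str, sensitive_patterns: list[str]) -> str:
    """Mask spans matching sensitive patterns, preserving the other words
    so downstream contextual reasoning still has cues to work with."""
    for pat in sensitive_patterns:
        text = re.sub(pat, lambda m: "*" * len(m.group()), text)
    return text

prompt = build_cot_prompt("office desk with a sticky note reading 'wifi: hunter2'")
masked = redact("wifi: hunter2", [r"hunter2"])
print(masked)  # → wifi: *******  (password hidden, 'wifi:' context cue kept)
```

The design intent mirrors the abstract: only the sensitive span is destroyed, while the contextual anchor ("wifi:", the office setting) survives, which is what lets the VLM keep inferring risk types after obfuscation.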