ContextLens: Modeling Imperfect Privacy and Safety Context for Legal Compliance

arXiv cs.CL / 4/15/2026


Key Points

  • The paper introduces ContextLens, a semi-rule-based framework that uses LLMs to interpret imperfect, ambiguous legal/privacy/safety contexts for compliance purposes rather than assuming complete information.
  • ContextLens grounds the input context in a legal domain by prompting LLMs with crafted questions covering applicability, general principles, and detailed legal provisions, while explicitly flagging known versus unknown factors.
  • Experiments on GDPR and the EU AI Act compliance benchmarks show ContextLens can significantly improve LLM compliance assessments and outperform prior baselines without additional training.
  • The approach also helps identify ambiguous or missing contextual factors that could affect compliance decisions in real-world scenarios.
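The question-driven grounding described above can be sketched in a few lines. Everything here is an illustrative assumption, not the paper's actual prompts, question set, or API: the question texts, the category names, and the `llm` callable interface are all hypothetical stand-ins for ContextLens's crafted questions and rule layer.

```python
from dataclasses import dataclass, field

# Hypothetical question bank spanning the three levels the paper describes:
# applicability, general principles, and detailed legal provisions.
# The actual ContextLens questions are not public in this summary.
QUESTIONS = {
    "applicability": [
        "Does the described processing fall within the scope of the GDPR?",
    ],
    "general_principles": [
        "Is the stated purpose of data collection specific and legitimate?",
    ],
    "detailed_provisions": [
        "Is a lawful basis under Art. 6 GDPR identified for the processing?",
    ],
}

@dataclass
class Assessment:
    answers: dict = field(default_factory=dict)   # question -> "yes"/"no"
    unknown: list = field(default_factory=list)   # ambiguous/missing factors

def assess(context: str, llm) -> Assessment:
    """Ground `context` by asking each crafted question; flag unanswerable ones."""
    result = Assessment()
    for category, questions in QUESTIONS.items():
        for q in questions:
            prompt = (
                f"Context:\n{context}\n\n"
                f"Question ({category}): {q}\n"
                "Answer strictly with 'yes', 'no', or 'unknown'."
            )
            answer = llm(prompt).strip().lower()
            if answer == "unknown":
                result.unknown.append(q)   # surfaced as a missing contextual factor
            else:
                result.answers[q] = answer
    return result
```

In this sketch, any question the model cannot answer from the given context is collected in `unknown`, which mirrors how the framework surfaces ambiguous or missing factors instead of forcing a compliance verdict on incomplete information.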

Abstract

Individuals' concerns about data privacy and AI safety are highly contextual and extend beyond sensitive patterns. Addressing these issues requires reasoning about the context to identify and mitigate potential risks. Although researchers have widely explored using large language models (LLMs) as evaluators for contextualized safety and privacy assessments, these efforts typically assume the availability of complete and clear context, whereas real-world contexts tend to be ambiguous and incomplete. In this paper, we propose ContextLens, a semi-rule-based framework that leverages LLMs to ground the input context in the legal domain and explicitly identify both known and unknown factors for legal compliance. Instead of directly assessing safety outcomes, ContextLens instructs LLMs to answer a set of crafted questions that span applicability, general principles, and detailed provisions, assessing compliance against pre-defined priorities and rules. We conduct extensive experiments on existing compliance benchmarks that cover the General Data Protection Regulation (GDPR) and the EU AI Act. The results suggest that ContextLens can significantly improve LLMs' compliance assessments and surpass existing baselines without any training. Additionally, ContextLens can further identify ambiguous and missing contextual factors.