Do LLMs Know What Is Private Internally? Probing and Steering Contextual Privacy Norms in Large Language Model Representations

arXiv cs.CL / 4/3/2026


Key Points

  • The paper examines whether LLMs internally represent contextual privacy norms (based on contextual integrity theory) and why they still disclose private information in high-stakes scenarios.
  • It reports that the three contextual-integrity parameters (information type, recipient, and transmission principle) appear in activation space as linearly separable and functionally independent directions across multiple models (a probing sketch follows this list).
  • Despite this internal encoding, the study finds persistent privacy leakage, indicating a mismatch between what the model represents and how it actually behaves.
  • The authors propose “CI-parametric steering,” which intervenes along each CI dimension independently and reduces privacy violations more effectively than steering with a single combined (monolithic) direction (a steering sketch follows the abstract).
  • Overall, the results suggest contextual privacy failures stem from representation–behavior misalignment rather than an absence of internal awareness of privacy concepts.
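
To make the probing claim concrete, here is a minimal, hypothetical sketch of how a linear probe for one CI parameter (the recipient) could be fit on hidden activations. It is not the authors' code: the model name, layer index, labeled scenarios, and the scikit-learn logistic-regression probe are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's code): fit a linear probe for one
# contextual-integrity (CI) parameter on LLM hidden activations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "meta-llama/Llama-3.1-8B"  # hypothetical model choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

LAYER = 16  # assumed mid-depth layer; the paper probes multiple layers and models

def last_token_activation(prompt: str) -> torch.Tensor:
    """Return the residual-stream activation of the final token at LAYER."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1, :]

# Hypothetical labeled contexts for the "recipient" parameter:
# 1 = disclosure to this recipient is appropriate, 0 = it is not.
scenarios = [
    ("Share the patient's diagnosis with their attending physician.", 1),
    ("Share the patient's diagnosis with their employer.", 0),
    # ... more labeled contexts ...
]

X = torch.stack([last_token_activation(s) for s, _ in scenarios]).float().numpy()
y = [label for _, label in scenarios]

probe = LogisticRegression(max_iter=1000).fit(X, y)
recipient_direction = torch.tensor(probe.coef_[0])  # a linear direction in activation space
print("probe accuracy on labeled scenarios:", probe.score(X, y))
```

A high probe accuracy of this kind is what the paper treats as evidence that the CI parameter is linearly encoded; repeating the procedure for information type and transmission principle yields one direction per parameter.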

Abstract

Large language models (LLMs) are increasingly deployed in high-stakes settings, yet they frequently violate contextual privacy by disclosing private information in situations where humans would exercise discretion. This raises a fundamental question: do LLMs internally encode contextual privacy norms, and if so, why do violations persist? We present the first systematic study of contextual privacy as a structured latent representation in LLMs, grounded in contextual integrity (CI) theory. Probing multiple models, we find that the three norm-determining CI parameters (information type, recipient, and transmission principle) are encoded as linearly separable and functionally independent directions in activation space. Despite this internal structure, models still leak private information in practice, revealing a clear gap between concept representation and model behavior. To bridge this gap, we introduce CI-parametric steering, which independently intervenes along each CI dimension. This structured control reduces privacy violations more effectively and predictably than monolithic steering. Our results demonstrate that contextual privacy failures arise from misalignment between representation and behavior rather than from missing awareness, and that leveraging the compositional structure of CI enables more reliable contextual privacy control, pointing toward concrete ways to improve contextual privacy understanding in LLMs.
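
The sketch below illustrates, under stated assumptions, what intervening along a single CI direction at inference time could look like. It reuses `model`, `tok`, and a probe-derived direction from the probing sketch above and is not the authors' implementation: the forward-hook mechanism, layer index, steering strength, and placeholder direction vectors are assumptions; the paper's CI-parametric steering applies an independent, targeted intervention for each of the three CI parameters.

```python
# Illustrative sketch only: add a scaled CI-parameter direction to the
# residual stream at one layer during generation.
import torch

LAYER = 16   # assumed intervention layer
ALPHA = 4.0  # assumed steering strength

def make_steering_hook(direction: torch.Tensor, alpha: float):
    unit = direction / direction.norm()
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * unit.to(hidden.dtype).to(hidden.device)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return hook

# Placeholder directions; in practice each would come from a probe like the
# `recipient_direction` fit in the sketch above.
directions = {
    "information_type": torch.randn(model.config.hidden_size),
    "recipient": torch.randn(model.config.hidden_size),
    "transmission_principle": torch.randn(model.config.hidden_size),
}

# Assumes a Llama-style module layout (model.model.layers); other
# architectures expose their decoder layers under different attribute paths.
handles = [
    model.model.layers[LAYER].register_forward_hook(make_steering_hook(d, ALPHA))
    for d in directions.values()
]

prompt = "A colleague asks for a patient's test results. Reply:"
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=60)
print(tok.decode(out[0], skip_special_tokens=True))

for h in handles:
    h.remove()  # detach the steering hooks after generation
```

Because each CI parameter has its own direction and scale, this kind of per-parameter control can be tuned independently, which is the compositional property the paper argues makes CI-parametric steering more predictable than a single monolithic steering vector.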