Reinforcing privacy reasoning in LLMs via normative simulacra from fiction

arXiv cs.LG / 4/24/2026

💬 Opinion · Models & Research

Key Points

  • The paper argues that LLM privacy behavior often conflicts with users’ contextual privacy expectations and proposes using Contextual Integrity (CI) to formalize privacy as appropriate information flows governed by context-relative norms.
  • It introduces a method that extracts “normative simulacra” (structured norm and information-flow representations) from fiction novels, then fine-tunes LLMs with supervised learning followed by GRPO reinforcement learning.
  • The training uses a composite reward: programmatic checks (e.g., task clarity, structural completeness, internal consistency, and context identification) plus an LLM judge that verifies whether privacy reasoning is grounded in the held-out normative universe from the source text.
  • To reduce overfitting, it applies per-completion contrastive scoring: each completion is compared against both the correct normative universe and a randomly chosen incorrect one, encouraging the model to condition on context rather than memorize source-specific norms (a code sketch of this reward follows the list).
  • Experiments on five CI-aligned benchmarks spanning distinct societal contexts show that GRPO with fiction-derived normative grounding achieves the highest score on a law-compliance benchmark and correlates more strongly with crowdsourced human privacy expectations than SFT alone.
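
To make the reward concrete, here is a minimal Python sketch of the composite reward with per-completion contrastive scoring, plus the group-relative normalization that GRPO applies to per-completion rewards. Everything here is an assumption for illustration: `programmatic_checks`, `judge_score`, the additive weighting, and all other names are hypothetical stand-ins, not the paper's implementation.

```python
from dataclasses import dataclass
from statistics import mean, stdev
from typing import Callable

@dataclass
class RewardWeights:
    programmatic: float = 0.5  # task clarity, completeness, consistency, context ID
    grounding: float = 0.5     # judge-verified grounding in the normative universe

def contrastive_reward(
    completion: str,
    correct_universe: str,
    wrong_universe: str,  # randomly drawn from a different source text
    programmatic_checks: Callable[[str], float],  # assumed: returns a score in [0, 1]
    judge_score: Callable[[str, str], float],     # assumed: grounding score in [0, 1]
    weights: RewardWeights = RewardWeights(),
) -> float:
    """Composite reward with per-completion contrastive scoring: the grounding
    term only pays off when the reasoning fits the correct normative universe
    better than a mismatched one, discouraging memorization of source norms."""
    prog = programmatic_checks(completion)
    grounded_correct = judge_score(completion, correct_universe)
    grounded_wrong = judge_score(completion, wrong_universe)
    contrast = max(0.0, grounded_correct - grounded_wrong)
    return weights.programmatic * prog + weights.grounding * contrast

def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """GRPO-style step: turn the rewards of a group of sampled completions into
    advantages by normalizing against the group mean and standard deviation."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]
```

The additive combination and the equal default weights are just one plausible choice; the summary does not specify how the programmatic and judge signals are actually combined.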

Abstract

Information-handling practices of LLM agents are broadly misaligned with the contextual privacy expectations of their users. Contextual Integrity (CI) provides a principled framework, defining privacy as the appropriate flow of information within context-relative norms. However, existing approaches either double the inference cost via supervisor-assistant architectures, or fine-tune on narrow, task-specific data. We propose extracting normative simulacra (structured representations of norms and information flows) from fiction novels and using them to fine-tune LLMs via supervised learning followed by GRPO reinforcement learning. Our composite reward function combines programmatic signals, including task clarity (subsuming schema validity, construct discrimination, and extraction confidence), structural completeness, internal consistency, and context identification, with an LLM judge that evaluates whether the model's privacy reasoning is grounded in the held-out normative universe of the source text. To mitigate overfitting, we introduce per-completion contrastive scoring: each completion is evaluated against both the correct normative universe and a randomly selected wrong one, teaching the model to condition on context rather than memorize source-specific norms. We evaluate on five CI-aligned benchmarks spanning distinct societal contexts and ablate the contributions of RL and normative grounding. Across seven models, SFT introduces a conservative prior toward restricting information flow, improving recognition of privacy-relevant situations but not the correctness of privacy judgments. GRPO with normative grounding achieves the highest score on a law compliance benchmark and strongest correlation with crowdsourced human privacy expectations, demonstrating that fiction-derived normative simulacra can teach contextual privacy reasoning that transfers to real-world domains.
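
For readers unfamiliar with CI, a "structured representation of norms and information flows" plausibly resembles the record sketched below, organized around the five standard Contextual Integrity parameters (sender, subject, recipient, information attribute, transmission principle). The class and field names are hypothetical; the paper's actual extraction schema is not given in this summary.

```python
from dataclasses import dataclass, field

@dataclass
class InformationFlow:
    sender: str                  # who transmits the information
    subject: str                 # whom the information is about
    recipient: str               # who receives it
    attribute: str               # the type of information transmitted
    transmission_principle: str  # the constraint governing the flow

@dataclass
class NormativeSimulacrum:
    source_text: str                    # the fiction novel it was extracted from
    context: str                        # the societal context, e.g. "healthcare"
    norms: list[str] = field(default_factory=list)        # context-relative norms
    flows: list[InformationFlow] = field(default_factory=list)

# Example: a flow that would violate a doctor-patient confidentiality norm.
flow = InformationFlow(
    sender="physician",
    subject="patient",
    recipient="employer",
    attribute="diagnosis",
    transmission_principle="without patient consent",
)
```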