BenchPreS: A Benchmark for Context-Aware Personalized Preference Selectivity of Persistent-Memory LLMs

arXiv cs.AI / 3/18/2026

Key Points

  • BenchPreS introduces a benchmark to evaluate whether memory-based user preferences are applied appropriately across different communication contexts in persistent-memory LLMs.
  • It uses two complementary metrics, Misapplication Rate (MR) and Appropriate Application Rate (AAR), to quantify when preferences are misapplied or correctly suppressed.
  • The study finds frontier LLMs struggle to apply preferences in a context-sensitive manner, with stronger adherence sometimes leading to more over-application.
  • Neither enhanced reasoning capabilities nor prompt-based defenses fully resolve the misalignment, suggesting preferences are treated as globally enforceable rules rather than context-dependent signals.
  • The results indicate a need for improved alignment strategies and normative guidance for personal preferences in memory-enabled LLMs.
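The paper's exact metric definitions are not reproduced here, but the two rates can plausibly be sketched as fractions over labeled evaluation cases. In this sketch, the `Case` type, its field names, and the choice of denominators (should-suppress contexts for MR, should-apply contexts for AAR) are assumptions, not the authors' formulas:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One evaluation instance for a stored user preference in a given context."""
    should_apply: bool   # ground truth: is applying the preference appropriate here?
    model_applied: bool  # observed: did the model apply the preference?

def misapplication_rate(cases: list[Case]) -> float:
    """MR (assumed form): share of should-suppress contexts where the
    model nonetheless applied the preference. Lower is better."""
    suppress = [c for c in cases if not c.should_apply]
    if not suppress:
        return 0.0
    return sum(c.model_applied for c in suppress) / len(suppress)

def appropriate_application_rate(cases: list[Case]) -> float:
    """AAR (assumed form): share of should-apply contexts where the
    model did apply the preference. Higher is better."""
    apply = [c for c in cases if c.should_apply]
    if not apply:
        return 0.0
    return sum(c.model_applied for c in apply) / len(apply)
```

Under these assumed definitions, the paper's finding that stronger preference adherence raises over-application would show up as AAR and MR rising together: a model that applies preferences everywhere scores well on AAR but poorly on MR.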

Abstract

Large language models (LLMs) increasingly store user preferences in persistent memory to support personalization across interactions. However, in third-party communication settings governed by social and institutional norms, some user preferences may be inappropriate to apply. We introduce BenchPreS, which evaluates whether memory-based user preferences are appropriately applied or suppressed across communication contexts. Using two complementary metrics, Misapplication Rate (MR) and Appropriate Application Rate (AAR), we find even frontier LLMs struggle to apply preferences in a context-sensitive manner. Models with stronger preference adherence exhibit higher rates of over-application, and neither reasoning capability nor prompt-based defenses fully resolve this issue. These results suggest current LLMs treat personalized preferences as globally enforceable rules rather than as context-dependent normative signals.