Facet-Level Persona Control by Trait-Activated Routing with Contrastive SAE for Role-Playing LLMs

arXiv cs.CL / 3/30/2026


Key Points

  • The paper proposes a training-efficient method for role-playing agent persona control using contrastively trained Sparse AutoEncoders (SAEs) that learn facet-level personality vectors aligned to the Big Five 30-facet model.
  • Instead of relying on prompt/RAG signals that can dilute over long dialogues or requiring persona-labeled supervised fine-tuning, it introduces trait-activated routing to dynamically select the relevant personality facets during generation.
  • The authors construct a leakage-controlled dataset of 15,000 samples with balanced supervision across facets, enabling the SAE to learn interpretable control vectors.
  • Experiments on LLMs show more stable character fidelity and more consistent output quality than Contrastive Activation Addition (CAA) and prompt-only baselines, with the combined SAE+Prompt configuration performing best.
  • The dataset is publicly available on GitHub, supporting reproducibility and further research into controllable persona steering for RPAs.
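The trait-activated routing idea above, where learned facet vectors are dynamically selected and added to the model's residual stream, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, router parameterization, top-k gating, and steering scale `alpha` are all assumptions.

```python
import torch

def route_and_steer(hidden, facet_vectors, router_weights, top_k=3, alpha=4.0):
    """Hypothetical sketch of trait-activated routing over 30 facet vectors.

    hidden:         (batch, d_model) residual-stream activations at one layer
    facet_vectors:  (30, d_model)    one learned control vector per Big Five facet
    router_weights: (d_model, 30)    linear router scoring facet relevance
    """
    # Score each of the 30 facets against the current hidden state.
    scores = hidden @ router_weights                    # (batch, 30)
    gate = torch.softmax(scores, dim=-1)
    # Keep only the top-k most relevant facets; zero out the rest.
    topk = torch.topk(gate, top_k, dim=-1)
    mask = torch.zeros_like(gate).scatter_(-1, topk.indices, topk.values)
    # Add the gated mixture of facet vectors back into the residual stream.
    steer = mask @ facet_vectors                        # (batch, d_model)
    return hidden + alpha * steer
```

Because only a few facet vectors are activated per step, the steering stays sparse and interpretable: each nonzero gate entry names the personality facet being amplified.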

Abstract

Personality control in Role-Playing Agents (RPAs) is commonly achieved via training-free methods that inject persona descriptions and memory through prompts or retrieval-augmented generation, or via supervised fine-tuning (SFT) on persona-specific corpora. While SFT can be effective, it requires persona-labeled data and retraining for new roles, limiting flexibility. In contrast, prompt- and RAG-based signals are easy to apply but can be diluted in long dialogues, leading to drift and occasionally inconsistent persona behavior. To address this, we propose a contrastive Sparse AutoEncoder (SAE) framework that learns facet-level personality control vectors aligned with the Big Five 30-facet model. A new 15,000-sample leakage-controlled corpus is constructed to provide balanced supervision for each facet. The learned vectors are integrated into the model's residual space and dynamically selected by a trait-activated routing module, enabling precise and interpretable personality steering. Experiments on Large Language Models (LLMs) show that the proposed method maintains stable character fidelity and output quality across contextualized settings, outperforming Contrastive Activation Addition (CAA) and prompt-only baselines. The combined SAE+Prompt configuration achieves the best overall performance, confirming that contrastively trained latent vectors can enhance persona control while preserving dialogue coherence. The dataset is available at: https://github.com/lunat5078/BigFive-Personality-Facets-Dataset
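To make the contrastive SAE framework concrete, here is a minimal sketch of how an SAE with a contrastive term might be trained on paired activations from high- and low-facet samples. The class name, loss weights, and margin-based contrastive objective are illustrative assumptions; the paper's actual architecture and objective may differ.

```python
import torch
import torch.nn.functional as F

class ContrastiveSAE(torch.nn.Module):
    """Hypothetical sparse autoencoder over residual-stream activations."""
    def __init__(self, d_model, d_latent):
        super().__init__()
        self.enc = torch.nn.Linear(d_model, d_latent)
        self.dec = torch.nn.Linear(d_latent, d_model)

    def forward(self, x):
        z = F.relu(self.enc(x))        # sparse nonnegative latent code
        return self.dec(z), z

def loss_fn(model, x_high, x_low, l1=1e-3, margin=1.0):
    """x_high / x_low: activations from high- vs. low-facet text (assumed pairing)."""
    rec_h, z_h = model(x_high)
    rec_l, z_l = model(x_low)
    # Standard SAE terms: reconstruction fidelity plus L1 sparsity.
    recon = F.mse_loss(rec_h, x_high) + F.mse_loss(rec_l, x_low)
    sparsity = l1 * (z_h.abs().mean() + z_l.abs().mean())
    # Contrastive term: push the mean latent codes of the two poles apart,
    # so the latent difference can serve as a facet control vector.
    contrast = F.relu(margin - (z_h.mean(0) - z_l.mean(0)).norm())
    return recon + sparsity + contrast
```

Under this sketch, the difference of mean latent codes (decoded back to model space) would play the role of a facet-level control vector, analogous to how CAA derives steering directions from contrastive activation pairs.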