AI Navigate

Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM

arXiv cs.AI / 3/20/2026


Key Points

  • Expert personas can steer LLMs toward domain-specific tones, but their overall utility has shown mixed results across tasks and domains.
  • This work introduces PRISM, a bootstrapping pipeline that self-distills an intent-conditioned expert persona into a gated LoRA adapter, requiring no external data, models, or knowledge.
  • PRISM is reported to enhance human preference alignment and safety on generative tasks while maintaining accuracy on discriminative tasks, across instruction-tuned and reasoning LLMs, with minimal memory and compute overhead.
  • The study analyzes how model optimization, task type, prompt length, and placement affect persona effectiveness and identifies conditions under which expert personas succeed or fail.
  • By enabling intent-based persona routing, PRISM aims to exploit the benefits of persona prompting while mitigating its potential harms.

Abstract

Persona prompting can steer LLM generation toward a domain-specific tone and pattern. This behavior enables use cases in multi-agent systems, where diverse interactions are crucial, and in human-centered tasks that require strong human alignment. Prior work offers mixed assessments of its utility: some studies report performance gains from expert personas in certain domains and note their contribution to diversity in synthetic data creation, while others find near-zero or negative impact on general utility. To fully leverage the benefits of LLM personas while avoiding their harms, a more comprehensive investigation of the underlying mechanism is needed. In this work, we study how model optimization, task type, prompt length, and placement affect expert persona effectiveness across instruction-tuned and reasoning LLMs, and provide insight into the conditions under which expert personas fail or succeed. Based on our findings, we develop PRISM (Persona Routing via Intent-based Self-Modeling), a pipeline that self-distills an intent-conditioned expert persona into a gated LoRA adapter through a bootstrapping process that requires no external data, models, or knowledge. PRISM enhances human preference and safety alignment on generative tasks while maintaining accuracy on discriminative tasks across all models, with minimal memory and compute overhead.
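The abstract mentions a "gated LoRA adapter" whose contribution is conditioned on the inferred intent. The paper does not specify the gate's exact form, so the following is only a minimal NumPy sketch of the general idea: a frozen base weight, a low-rank LoRA update, and a hypothetical scalar gate computed from the input that scales the adapter's output (all shapes and the gate parameterization are illustrative assumptions, not PRISM's actual design).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden size and LoRA rank (illustrative values)

# Frozen base weight and the low-rank LoRA factors.
W = rng.standard_normal((d, d)) / np.sqrt(d)
A = rng.standard_normal((r, d)) / np.sqrt(d)  # down-projection (trainable)
B = np.zeros((d, r))                          # up-projection, zero-init as in standard LoRA

# Hypothetical intent gate: a scalar in (0, 1) computed from the input.
w_gate = rng.standard_normal(d) / np.sqrt(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_lora_forward(x):
    """Base output plus a gate-scaled low-rank (persona) update."""
    g = sigmoid(w_gate @ x)            # intent-conditioned gate
    return W @ x + g * (B @ (A @ x))   # gate ~0 recovers the base model

x = rng.standard_normal(d)
y = gated_lora_forward(x)
# With B zero-initialized, the adapter contributes nothing yet,
# so the output equals the frozen base model's output:
assert np.allclose(y, W @ x)
```

The zero-initialized up-projection means the adapter starts as an exact no-op; during self-distillation the gate would then learn when to inject the persona behavior and when to leave the base model untouched.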