Probing the Lack of Stable Internal Beliefs in LLMs

arXiv cs.CL · March 27, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies why persona-driven LLMs fail to maintain stable behavioral tendencies across long, multi-turn interactions, focusing on the absence of consistent internal belief representations.
  • It introduces an implicit-consistency test based on a 20-questions-style riddle game, in which the model must hold a secretly chosen target fixed while answering yes/no guesses across turns.
  • Evaluation results show that LLMs struggle to preserve an unstated goal over time, with their implicit “goals” shifting between turns.
  • The model’s latent consistency improves only when the selected target is explicitly included in the dialogue context, suggesting current systems need stronger goal anchoring.
  • The findings point to the need for mechanisms that maintain implicit goals across turns to enable more realistic personality modeling for interactive dialogue applications.

Abstract

Persona-driven large language models (LLMs) require consistent behavioral tendencies across interactions to simulate human-like personality traits such as persistence or reliability. However, current LLMs often lack stable internal representations that anchor their responses over extended dialogues. This work explores whether LLMs can maintain "implicit consistency", defined as persistent adherence to an unstated goal in multi-turn interactions. We designed a 20-questions-style riddle game paradigm in which an LLM is tasked with secretly selecting a target and responding to users' guesses with yes/no answers. Through evaluations, we find that LLMs struggle to preserve latent consistency: their implicit "goals" shift across turns unless the selected target is explicitly provided in context. These findings highlight critical limitations in building persona-driven LLMs and underscore the need for mechanisms that anchor implicit goals over time, a key requirement for realistic personality modeling in interactive applications such as dialogue systems.
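The paper's evaluation idea can be illustrated with a small sketch. This is not the authors' code: the candidate targets, questions, and the drift-detection logic below are all illustrative assumptions. The core check is that a model's yes/no answers are "implicitly consistent" only if some single latent target could truthfully produce all of them; once no candidate explains the full transcript, the implicit goal has shifted.

```python
# Hypothetical sketch of an implicit-consistency probe for a 20-questions-style
# riddle game. All names, candidates, and questions here are illustrative
# assumptions, not taken from the paper.

CANDIDATES = ["apple", "piano", "whale"]

# Toy attribute table: question -> set of targets whose truthful answer is "yes".
YES_IF = {
    "Is it alive?": {"apple", "whale"},
    "Is it a fruit?": {"apple"},
    "Is it bigger than a car?": {"whale"},
}

def truthful_answer(target: str, question: str) -> str:
    """Ground-truth yes/no answer for a given target."""
    return "yes" if target in YES_IF[question] else "no"

def consistent_targets(transcript: list[tuple[str, str]]) -> set[str]:
    """Return the candidates whose truthful answers match every
    (question, answer) pair observed so far. An empty set means the
    model's answers cannot be explained by any single latent target,
    i.e. its implicit "goal" has drifted across turns."""
    survivors = set(CANDIDATES)
    for question, answer in transcript:
        survivors = {t for t in survivors
                     if truthful_answer(t, question) == answer}
    return survivors

# A drifting "model": its first answer fits apple, its second fits only
# whale -- mimicking the goal shift the paper reports.
drifting = [
    ("Is it a fruit?", "yes"),
    ("Is it bigger than a car?", "yes"),
]
print(consistent_targets(drifting))  # -> set(): no single target explains both answers
```

In the paper's setting, the answers would come from an LLM rather than a lookup table, and the finding is that the survivor set collapses to empty over long transcripts unless the chosen target is written into the dialogue context.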