AI Navigate

Markovian Generation Chains in Large Language Models

arXiv cs.AI / 3/13/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper defines iterative inference by LLMs as Markovian generation chains, where each step uses a fixed prompt template and the previous output, with no memory of earlier steps.
  • Through experiments on iterative rephrasing and round-trip translation, it shows that outputs can converge to a small recurrent set or continue to produce novel sentences over a finite horizon.
  • A sentence-level Markov chain model and analysis of simulated data reveal that diversity can either increase or decrease based on factors like the temperature parameter and the initial input.
  • The results provide insights into the dynamics of iterative LLM inference and their implications for multi-agent LLM systems.

Abstract

The widespread use of large language models (LLMs) raises an important question: how do texts evolve when they are repeatedly processed by LLMs? In this paper, we define this iterative inference process as Markovian generation chains, where each step takes a specific prompt template and the previous output as input, without including any prior memory. In iterative rephrasing and round-trip translation experiments, the output either converges to a small recurrent set or continues to produce novel sentences over a finite horizon. Through sentence-level Markov chain modeling and analysis of simulated data, we show that the iterative process can either increase or reduce sentence diversity depending on factors such as the temperature parameter and the initial input sentence. These results offer valuable insights into the dynamics of iterative LLM inference and their implications for multi-agent LLM systems.
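The two regimes described above can be illustrated with a toy simulation. This is not the paper's actual model; it is a minimal sketch that treats each "sentence" as an abstract integer state and uses a single noise probability as a stand-in for the sampling temperature. At zero noise, each state deterministically maps to one rewrite, so the trajectory falls into a small recurrent set; with higher noise, novel states keep appearing over the same horizon. All names (`simulate_chain`, `preferred`, etc.) are illustrative assumptions.

```python
import random

def simulate_chain(n_states, temperature, steps, seed=0):
    """Toy sentence-level Markov chain over integer 'sentence' states.

    Each state has one preferred successor (a deterministic rewrite);
    with probability `temperature`, the chain instead jumps to a random
    state, mimicking sampling noise in LLM decoding.
    """
    rng = random.Random(seed)
    # Random functional graph: each state's deterministic rewrite target.
    preferred = [rng.randrange(n_states) for _ in range(n_states)]
    state = 0
    visited = [state]
    for _ in range(steps):
        if rng.random() < temperature:
            state = rng.randrange(n_states)  # noisy jump: may yield a novel sentence
        else:
            state = preferred[state]         # greedy rewrite: fixed successor
        visited.append(state)
    return visited

# Zero temperature: the trajectory quickly enters a small recurrent set.
low = simulate_chain(n_states=1000, temperature=0.0, steps=200)
# High temperature: fresh states keep appearing over the same finite horizon.
high = simulate_chain(n_states=1000, temperature=0.8, steps=200)
print("distinct states (low temp): ", len(set(low)))
print("distinct states (high temp):", len(set(high)))
```

Run once, the low-temperature trajectory visits far fewer distinct states than the high-temperature one, mirroring the convergence-versus-diversity dichotomy the paper reports.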