The Last Fingerprint: How Markdown Training Shapes LLM Prose

arXiv cs.CL · March 31, 2026


Key Points

  • The paper argues that LLM em-dash “overuse” is not just a stylistic quirk but markdown leaking into prose from markdown-saturated training data.
  • It proposes a mechanistic genealogy linking training-data structure, internalization of formatting conventions, the em dash’s dual role in markdown and prose, and post-training amplification of the effect.
  • A two-condition suppression experiment across 12 models from multiple providers finds that when instructed to avoid markdown, most overt markdown features disappear while em dashes largely persist.
  • Em-dash frequency and suppression resistance are shown to vary by model, ranging from zero in Meta’s Llama models to substantially higher rates in others, and to serve as a diagnostic signature of fine-tuning methodology.
  • A three-condition suppression gradient and a base-versus-instruct comparison suggest the tendency can exist pre-RLHF and may not be fully removable even with explicit prohibition prompts (a sketch of such prompts follows this list).
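
For concreteness, the conditions described above might be phrased along the following lines. This is a minimal sketch in Python: the task text, the exact prompt wording, and the condition names are illustrative assumptions, not the paper's released materials.

    # Hypothetical prompt conditions for the suppression experiments.
    # The exact wording used in the paper is not given in this summary.
    TASK = "Explain how photosynthesis works, in about 300 words."

    CONDITIONS = {
        # Two-condition experiment: unconstrained output vs. a markdown ban.
        "baseline": TASK,
        "no_markdown": TASK + " Do not use any markdown formatting "
                              "(no headers, bullet lists, or bold text).",
        # Third rung of the suppression gradient: an explicit em dash ban.
        "no_em_dash": TASK + " Do not use any markdown formatting, "
                             "and do not use em dashes.",
    }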

Abstract

Large language models produce em dashes at varying rates, and the observation that some models "overuse" them has become one of the most widely discussed markers of AI-generated text. Yet no mechanistic account of this pattern exists, and the parallel observation that LLMs default to markdown-formatted output has never been connected to it. We propose that the em dash is markdown leaking into prose -- the smallest surviving unit of the structural orientation that LLMs acquire from markdown-saturated training corpora. We present a five-step genealogy connecting training data composition, structural internalization, the dual-register status of the em dash, and post-training amplification. We test this with a two-condition suppression experiment across twelve models from five providers (Anthropic, OpenAI, Meta, Google, DeepSeek): when models are instructed to avoid markdown formatting, overt features (headers, bullets, bold) are eliminated or nearly eliminated, but em dashes persist -- except in Meta's Llama models, which produce none at all. Em dash frequency and suppression resistance vary from 0.0 per 1,000 words (Llama) to 9.1 (GPT-4.1 under suppression), functioning as a signature of the specific fine-tuning procedure applied. A three-condition suppression gradient shows that even explicit em dash prohibition fails to eliminate the artifact in some models, and a base-vs-instruct comparison confirms that the latent tendency exists pre-RLHF. These findings connect two previously isolated online discourses and reframe em dash frequency as a diagnostic of fine-tuning methodology rather than a stylistic defect.
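
To make the measurement concrete, here is a minimal sketch of the two quantities the abstract reports: counts of overt markdown features, and em dashes per 1,000 words. The regular expressions and the `generate` callable standing in for a model API are assumptions for illustration, not the paper's code.

    import re

    EM_DASH = "\u2014"  # U+2014 EM DASH

    # Overt markdown features the abstract reports as suppressible.
    MARKDOWN_PATTERNS = {
        "header": re.compile(r"^#{1,6}\s", re.MULTILINE),
        "bullet": re.compile(r"^\s*[-*+]\s", re.MULTILINE),
        "bold": re.compile(r"\*\*[^*]+\*\*"),
    }

    def markdown_feature_counts(text: str) -> dict:
        """Count overt markdown features a suppression prompt should remove."""
        return {name: len(pat.findall(text))
                for name, pat in MARKDOWN_PATTERNS.items()}

    def em_dash_rate(text: str) -> float:
        """Em dashes per 1,000 words: the rate metric quoted in the abstract."""
        words = len(text.split())
        return 1000.0 * text.count(EM_DASH) / words if words else 0.0

    def compare(generate, conditions):
        """Run one model over each condition. `generate` is a hypothetical
        callable wrapping an LLM API (prompt in, completion text out)."""
        for name, prompt in conditions.items():
            reply = generate(prompt)
            print(name, markdown_feature_counts(reply),
                  f"{em_dash_rate(reply):.1f} em dashes / 1,000 words")

Under the paper's claim, the feature counts should drop to near zero in the no-markdown condition while the em-dash rate stays elevated for most models (e.g. 9.1 per 1,000 words for GPT-4.1 under suppression, versus 0.0 for Llama).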