The Surprising Universality of LLM Outputs: A Real-Time Verification Primitive

arXiv cs.CL / 4/29/2026

💬 Opinion · Developer Stack & Infrastructure · Ideas & Deep Analysis · Tools & Practical Usage · Models & Research

Key Points

  • The paper identifies a statistical regularity across frontier LLM outputs where token rank-frequency distributions converge to a shared two-parameter Mandelbrot ranking form across multiple vendors, model sizes, and held-out domains.
  • It reports a fast, CPU-only verification/scoring primitive that operates at about 2.6 microseconds per token and can be up to five orders of magnitude faster than existing sampling-based detectors.
  • Using this shared distribution, the work proposes statistical model fingerprinting to verify whether text matches a claimed LLM family without cryptographic watermarks or access to model internals.
  • It also provides a model-agnostic reference distribution for black-box output assessment, including a rank-only mode for closed APIs, and positions it as a first-pass triage component rather than a replacement for stronger verifiers.
  • Pilot evaluations (e.g., FRANK, TruthfulQA, HaluEval) suggest the approach helps detect lexical anomalies and unsupported entities, but it struggles with reasoning errors that require domain-appropriate vocabulary.
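The paper does not include its fitting code, but the two-parameter Mandelbrot (Zipf-Mandelbrot) form it fits is standard: f(r) ∝ (r + q)^(-s) over token ranks r. As a rough sketch of how such a fit could be reproduced, the snippet below grid-searches the offset q and solves for the exponent s and scale by linear least squares in log space, reporting R² as the paper does. The fitting procedure is an assumption for illustration, not the authors' method.

```python
import numpy as np

def fit_mandelbrot(freqs):
    """Fit the two-parameter Mandelbrot form f(r) ~ C / (r + q)^s to an
    observed rank-frequency distribution.

    Sketch only: grid search over the offset q, with (s, log C) solved by
    linear least squares in log space for each candidate q. Returns the
    parameters of the best fit by R^2."""
    freqs = np.sort(np.asarray(freqs, dtype=float))[::-1]  # descending = rank order
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    logf = np.log(freqs)
    best = None
    for q in np.linspace(0.0, 10.0, 201):  # hypothetical search range for q
        # log f(r) = log C - s * log(r + q)  is linear in (s, log C)
        A = np.column_stack([-np.log(ranks + q), np.ones_like(ranks)])
        (s, logC), *_ = np.linalg.lstsq(A, logf, rcond=None)
        pred = A @ np.array([s, logC])
        r2 = 1.0 - np.sum((logf - pred) ** 2) / np.sum((logf - logf.mean()) ** 2)
        if best is None or r2 > best[3]:
            best = (s, q, logC, r2)
    return best  # (s, q, logC, r2)
```

On synthetic data drawn exactly from the Mandelbrot form, the fit recovers the generating parameters and an R² near 1; on real model outputs, the paper reports R² above 0.94 for 34 of 36 model-by-domain fits.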

Abstract

We report a striking statistical regularity in frontier LLM outputs that enables a CPU-only scoring primitive running at 2.6 microseconds per token, with estimated latency up to 100,000× (five orders of magnitude) below existing sampling-based detectors. Across six contemporary models from five independent vendors, two generation sizes, and five held-out domains, token rank-frequency distributions converge to the same two-parameter Mandelbrot ranking distribution, with 34 of 36 model-by-domain fits exceeding R² = 0.94 and 35 of 36 favoring Mandelbrot over Zipf by AIC. The shared family does not collapse the models into statistical duplicates. Fitted Mandelbrot parameters remain cleanly separable between models: the cross-model spread in q (1.63 to 3.69) exceeds its per-model bootstrap standard deviation (0.03 to 0.10) by more than an order of magnitude, yielding tens of standard deviations of separation per few thousand output tokens. Two capabilities follow. First, statistical model fingerprinting: text from a vendor-delivered LLM can be tested against its claimed model family without cryptographic watermarks or access to model internals, supporting provenance verification and silent-substitution audits. Second, a model-agnostic reference distribution for black-box output assessment, from which we derive a single-pass scoring primitive that composes with model log probabilities when available and degrades to a rank-only mode usable on closed APIs. Pilot results on FRANK, TruthfulQA, and HaluEval map where the primitive helps (lexical anomalies, unsupported entities) and where it structurally cannot (reasoning errors in domain-appropriate vocabulary). We position the primitive as a first-pass triage layer in compound evaluation stacks, not as a replacement for sampling-based or source-conditioned verifiers.
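The abstract's "rank-only mode usable on closed APIs" suggests a scoring rule that needs nothing but each token's rank in a reference vocabulary. The sketch below is one hypothetical instantiation, not the paper's implementation: it scores a token sequence by its average negative log-probability under a Mandelbrot reference distribution over ranks. The function name, the `ref_rank` lookup table, and the parameter values are all assumptions for illustration.

```python
import math

def rank_only_score(tokens, ref_rank, s=1.2, q=2.7, vocab_size=50_000):
    """Hypothetical rank-only scoring sketch.

    Scores a token sequence by the average negative log-probability of each
    token's reference rank under a Mandelbrot distribution
    p(r) ∝ (r + q)^(-s), normalized over ranks 1..vocab_size.
    Lower score = more typical of the reference distribution.
    `ref_rank` maps token -> rank; unseen tokens get the worst rank.
    Illustrative values of s, q, and vocab_size, not the paper's."""
    # normalization constant over the finite rank range
    Z = sum((r + q) ** (-s) for r in range(1, vocab_size + 1))
    total = 0.0
    for tok in tokens:
        r = ref_rank.get(tok, vocab_size)
        total += s * math.log(r + q) + math.log(Z)  # -log p(r)
    return total / max(len(tokens), 1)
```

Because each token needs only a hash lookup, a log, and an add, this kind of single-pass scorer is consistent with the microseconds-per-token, CPU-only regime the paper reports; text dominated by high-rank (rare) tokens scores worse than text drawn from the head of the distribution, which is what makes lexical anomalies and unsupported entities visible while reasoning errors in ordinary vocabulary stay invisible.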