A Comparative Analysis of LLM Memorization at Statistical and Internal Levels: Cross-Model Commonalities and Model-Specific Signatures

arXiv cs.CL / 3/24/2026


Key Points

  • The paper compares memorization behavior across multiple LLM families (Pythia, OpenLLaMa, StarCoder, and OLMo variants) to distinguish what is common from what is model-specific, addressing the limitation that prior work typically studied only one model series.
  • At the statistical level, it finds memorization rate scales log-linearly with model size and that memorized sequences can be further compressed, along with shared frequency/domain distribution patterns.
  • At the internal/representational level, the study shows that memorized sequences are more sensitive to injected perturbations than other sequences, and it identifies a common decoding process and key attention heads via middle-layer decoding and attention-head ablation.
  • Despite shared mechanisms, the distribution of important attention heads differs by model family, indicating family-level signatures alongside cross-model commonalities.
  • Overall, the work aims to support a more universal, fundamental understanding of LLM memorization by connecting findings from multiple experimental angles.
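The log-linear scaling claim in the second key point can be sketched with a toy least-squares fit: if the memorization rate grows linearly in the logarithm of model size, fitting the rate against log10(parameters) should leave only tiny residuals. The data points below are invented for illustration; they are not measurements from the paper.

```python
import math

# Hypothetical (parameter count, memorization rate) pairs, invented for
# illustration only -- these are NOT measurements from the paper.
sizes = [70e6, 160e6, 410e6, 1.0e9, 2.8e9, 6.9e9]
rates = [0.004, 0.007, 0.011, 0.015, 0.019, 0.023]

# "Log-linear" scaling means rate ~ slope * log10(size) + intercept, so an
# ordinary least-squares fit in log10(size) should fit these points closely.
xs = [math.log10(s) for s in sizes]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(rates) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, rates)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

# Largest deviation between the fitted line and the toy data points.
max_residual = max(abs(slope * x + intercept - y) for x, y in zip(xs, rates))
print(f"slope = {slope:.4f} per decade of parameters, "
      f"max residual = {max_residual:.4f}")
```

On a real model series, the x-values would be the released checkpoint sizes and the y-values measured extraction rates; a small residual relative to the rate range is what supports calling the trend log-linear.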

Abstract

Memorization is a fundamental component of intelligence for both humans and LLMs. However, while LLM performance scales rapidly, our understanding of memorization lags behind. Because access to LLM pre-training data is limited, most previous studies focus on a single model series, producing isolated observations and leaving it unclear which findings are general and which are model-specific. In this study, we collect multiple model series (Pythia, OpenLLaMa, StarCoder, OLMo1/2/3) and analyze their shared and unique memorization behavior at both the statistical and internal levels, connecting individual observations while presenting new findings. At the statistical level, we reveal that the memorization rate scales log-linearly with model size and that memorized sequences can be further compressed. Further analysis demonstrates a shared frequency and domain distribution pattern for memorized sequences, although individual models also show distinctive features within these observations. At the internal level, we find that LLMs can remove certain injected perturbations, while memorized sequences are more sensitive to them. By decoding middle layers and ablating attention heads, we reveal a general decoding process and shared important heads for memorization. However, the distribution of those important heads differs between families, constituting a unique family-level signature. By bridging these experiments and revealing new findings, this study paves the way for a universal and fundamental understanding of memorization in LLMs.
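The compressibility finding can be illustrated with a generic compressor: repetitive or templated text (a rough stand-in for the kind of sequences models tend to memorize, such as licenses and boilerplate) compresses far better than comparable random text. This is an illustrative analogy on invented inputs, not the paper's measurement procedure.

```python
import random
import string
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size divided by original size; lower = more compressible."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(0)

# Boilerplate-like text: a stand-in for heavily memorized sequences.
boilerplate = "Permission is hereby granted, free of charge. " * 40

# Random text of the same length: a stand-in for ordinary, unmemorized text.
noise = "".join(random.choices(string.ascii_letters + " ", k=len(boilerplate)))

ratio_boilerplate = compression_ratio(boilerplate)
ratio_noise = compression_ratio(noise)
print(f"boilerplate ratio: {ratio_boilerplate:.3f}")
print(f"random ratio:      {ratio_noise:.3f}")
```

The gap between the two ratios is the intuition behind the paper's claim: sequences that models memorize tend to carry less information per token than their raw length suggests, so they admit further compression.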