Language Model Maps for Prompt-Response Distributions via Log-Likelihood Vectors
arXiv cs.CL / 3/20/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper proposes representing language models by log-likelihood vectors over prompt-response pairs to compare their conditional distributions.
- It shows that distances between models in this space approximate the KL divergence between the corresponding conditional distributions.
- Experiments on a large collection of publicly available language models show that the resulting maps reveal meaningful global structure and relate to model attributes and task performance.
- The approach captures systematic shifts induced by prompt modifications and shows approximate additive compositionality, enabling prediction of composite prompt effects.
- It introduces PMI vectors to reduce the influence of unconditional distributions, which sometimes better reflect training-data differences and aid analysis of input-dependent model behavior (a minimal code sketch of the vector construction follows this list).
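
The key points above describe the construction only at a high level. The sketch below shows, under assumptions of mine rather than the paper's exact protocol, how one might build a log-likelihood vector for each of several Hugging Face causal language models over a shared set of prompt-response pairs and compare models by distance in that space as a rough proxy for KL divergence. The evaluation pairs, the model names, and the centering step are purely illustrative.

```python
# Illustrative sketch only: not the paper's released code or exact construction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def response_log_likelihood(model, tokenizer, prompt: str, response: str) -> float:
    """Sum of log p(response token | prompt, preceding response tokens).
    Simplification: assumes the prompt's token ids are a prefix of the joint tokenization."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits                      # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)  # predicts tokens 1..seq_len-1
    targets = full_ids[:, 1:]
    token_ll = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (1, seq_len-1)
    start = prompt_ids.shape[1] - 1  # first position that predicts a response token
    return token_ll[0, start:].sum().item()

def log_likelihood_vector(model_name: str, pairs) -> torch.Tensor:
    """One coordinate per (prompt, response) pair; this vector 'locates' the model."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    return torch.tensor([response_log_likelihood(model, tok, p, r) for p, r in pairs])

# Hypothetical evaluation set and model list (the paper uses many public LLMs
# and a much larger set of prompt-response pairs).
pairs = [("Translate to French: cat ->", " chat"), ("2 + 2 =", " 4")]
model_names = ["gpt2", "distilgpt2"]

vectors = torch.stack([log_likelihood_vector(m, pairs) for m in model_names])

# Center each model's vector and use squared Euclidean distance as a crude proxy
# for divergence between the models' conditional distributions. The exact
# centering/scaling that makes this approximate KL divergence is stated in the
# paper; this step is an assumption here.
centered = vectors - vectors.mean(dim=1, keepdim=True)
dist_sq = torch.cdist(centered, centered) ** 2
print(dist_sq)

# The PMI variant mentioned above would subtract each model's unconditional
# log-likelihood of the response, log p(response), from the corresponding
# coordinate before comparing models.
```

The same vectors can be fed to any standard embedding or dimensionality-reduction method to draw the "maps" described in the key points; the choice of visualization is independent of the vector construction.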
Related Articles

Attacks On Data Centers, Qwen3.5 In All Sizes, DeepSeek’s Huawei Play, Apple’s Multimodal Tokenizer
The Batch

Your AI generated code is "almost right", and that is actually WORSE than it being "wrong".
Dev.to

Lessons from Academic Plagiarism Tools for SaaS Product Development
Dev.to

Core Allocation Optimization for Energy‑Efficient Multi‑Core Scheduling in ARINC650 Systems
Dev.to

AI in Official Searches at the DPMA: What Patent Attorneys Should Now Keep in Mind for New Applications (as of March 2026)
Dev.to