Language Model Maps for Prompt-Response Distributions via Log-Likelihood Vectors
arXiv cs.CL / 3/20/2026
Key Points
- The paper proposes representing language models by log-likelihood vectors over prompt-response pairs to compare their conditional distributions.
- It shows that distances between models in this space approximate the KL divergence between the corresponding conditional distributions.
- Experiments on a large collection of publicly available language models show that the resulting maps reveal meaningful global structure and correlate with model attributes and task performance.
- The approach captures systematic shifts induced by prompt modifications and shows approximate additive compositionality, enabling prediction of composite prompt effects.
- It introduces PMI vectors to reduce the influence of unconditional distributions, which can better reflect training-data differences and aid analysis of input-dependent model behavior.
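The core idea in the first two key points can be sketched in a few lines. The toy "models" below are hypothetical probability tables standing in for real LLM conditional likelihoods, and the squared Euclidean distance is used as an illustration of a KL-approximating distance in log-likelihood space; this is a minimal sketch of the concept, not the paper's implementation.

```python
import math

# Hypothetical toy "models": each assigns a conditional probability
# p(response | prompt) over a small fixed set of prompt-response pairs.
MODEL_A = {("Q1", "yes"): 0.7, ("Q1", "no"): 0.3,
           ("Q2", "yes"): 0.4, ("Q2", "no"): 0.6}
MODEL_B = {("Q1", "yes"): 0.6, ("Q1", "no"): 0.4,
           ("Q2", "yes"): 0.5, ("Q2", "no"): 0.5}

PAIRS = sorted(MODEL_A)  # fixed ordering of prompt-response pairs

def log_likelihood_vector(model):
    """Map a model to its vector of log-probabilities over PAIRS."""
    return [math.log(model[pair]) for pair in PAIRS]

def squared_distance(u, v):
    """Squared Euclidean distance between two log-likelihood vectors.
    In this space, such distances serve as a proxy for divergence
    between the models' conditional distributions."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

d = squared_distance(log_likelihood_vector(MODEL_A),
                     log_likelihood_vector(MODEL_B))
print(round(d, 4))
```

Embedding many models this way yields a point cloud whose geometry can then be visualized or clustered, which is what the "map" in the title refers to.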