Parallelograms Strike Back: LLMs Generate Better Analogies than People
arXiv cs.CL / 3/20/2026
Key Points
- The paper compares human- and LLM-generated four-term word analogies and reports that LLM completions are judged better and align more closely with parallelogram structure in a GloVe embedding space (see the sketch after this list).
- The LLM advantage arises from greater parallelogram alignment and lower dependence on easily accessible, high-frequency words, not from improved sensitivity to local similarity.
- When the comparison is restricted to modal (most frequent) responses, however, the LLM advantage disappears, indicating that humans can match LLMs on their top responses.
- The results suggest the parallelogram model remains a reasonable account of word analogy, with LLMs providing more consistent, constraint-satisfying completions.
- Implications point to AI-assisted analogy generation and cognitive modeling, showing how distributions of completions differ between humans and LLMs.
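To make the parallelogram claim concrete, here is a minimal sketch of how an analogy completion can be scored under the parallelogram model: for a : b :: c : d, the fourth term is expected to lie near b - a + c in embedding space. The toy vectors, function names, and vocabulary below are illustrative assumptions, not the paper's code or data; the study itself uses pretrained GloVe embeddings and its own alignment measure.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def parallelogram_score(vectors, a, b, c, d):
    """Score how well d completes a : b :: c : d under the parallelogram
    model, i.e. how close d is to the offset target b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    return cosine(target, vectors[d])

def best_completion(vectors, a, b, c):
    """Return the vocabulary word (excluding the three cue words) whose
    embedding is closest to the parallelogram target b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = (w for w in vectors if w not in {a, b, c})
    return max(candidates, key=lambda w: cosine(target, vectors[w]))

if __name__ == "__main__":
    # Toy 2-D "embeddings" purely to show the mechanics; real use would
    # load pretrained GloVe vectors over a full vocabulary instead.
    toy = {
        "king":  np.array([0.9, 0.8]),
        "queen": np.array([0.9, 0.2]),
        "man":   np.array([0.1, 0.8]),
        "woman": np.array([0.1, 0.2]),
    }
    print(parallelogram_score(toy, "man", "woman", "king", "queen"))
    print(best_completion(toy, "man", "woman", "king"))
```

On the toy vectors this returns "queen" for man : woman :: king : ?, the textbook parallelogram completion; with real GloVe vectors the same offset-and-rank scoring applies to arbitrary four-term analogies, which is the kind of alignment the paper measures.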
Related Articles
Day 10: 230 Sessions of Hustle and It Comes Down to One Person Reading a Document
Dev.to
5 Dangerous Lies Behind Viral AI Coding Demos That Break in Production
Dev.to
Two bots, one confused server: what Nimbus revealed about AI agent identity
Dev.to
OpenTelemetry just standardized LLM tracing. Here's what it actually looks like in code.
Dev.to
PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance
Dev.to