AI Navigate

Parallelograms Strike Back: LLMs Generate Better Analogies than People

arXiv cs.CL / 3/20/2026

📰 News · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper compares human and LLM-generated four-term word analogies and reports that LLM completions are judged better and align more closely with parallelogram structure in a GloVe embedding space.
  • The LLM advantage arises from greater parallelogram alignment and lower dependence on easily accessible, high-frequency words, not from improved sensitivity to local similarity.
  • However, when the comparison is restricted to modal (most frequent) responses, the LLM advantage disappears, indicating that humans' top responses match those of LLMs.
  • The results suggest the parallelogram model remains a reasonable account of word analogy, with LLMs providing more consistent, constraint-satisfying completions.
  • Implications point to AI-assisted analogy generation and cognitive modeling, showing how distributions of completions differ between humans and LLMs.

Abstract

Four-term word analogies (A:B::C:D) are classically modeled geometrically as "parallelograms," yet recent work suggests this model poorly captures how humans produce analogies, with simple local-similarity heuristics often providing a better account (Peterson et al., 2020). But does the parallelogram model fail because it is a bad model of analogical relations, or because people are not very good at generating relation-preserving analogies? We compared human and large language model (LLM) analogy completions on the same set of analogy problems from Peterson et al. (2020). We find that LLM-generated analogies are reliably judged as better than human-generated ones, and are also more closely aligned with the parallelogram structure in a distributional embedding space (GloVe). Crucially, we show that the improvement over human analogies was driven by greater parallelogram alignment and reduced reliance on accessible words rather than enhanced sensitivity to local similarity. Moreover, the LLM advantage is driven not by uniformly superior responses by LLMs, but by humans producing a long tail of weak completions: when only modal (most frequent) responses by both systems are compared, the LLM advantage disappears. However, greater parallelogram alignment and lower word frequency continue to predict which LLM completions are rated higher than those of humans. Overall, these results suggest that the parallelogram model is not a poor account of word analogy. Rather, humans may often fail to produce completions that satisfy this relational constraint, whereas LLMs do so more consistently.
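The parallelogram model the paper tests completes A:B::C:? by finding the word whose vector is closest to B − A + C. A minimal sketch of that idea, using toy hand-made vectors (hypothetical values chosen for illustration; the paper uses real pretrained GloVe embeddings):

```python
import numpy as np

# Toy 4-dimensional "embeddings" standing in for GloVe vectors.
# These hypothetical values are chosen so the classic analogy works out;
# real GloVe vectors would be loaded from a pretrained embedding file.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.5, 0.9, 0.0, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9, 0.1]),
    "apple": np.array([0.1, 0.2, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def parallelogram_complete(a, b, c, vocab):
    """Complete A:B::C:? with the word nearest to B - A + C,
    excluding the three cue words themselves."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vocab[w], target))

print(parallelogram_complete("man", "king", "woman", vocab))  # -> queen
```

Measuring how closely a given completion D aligns with this parallelogram prediction (e.g. the cosine between the vector for D and B − A + C) is one way to quantify the "parallelogram alignment" the paper reports.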