Structural Ranking of the Cognitive Plausibility of Computational Models of Analogy and Metaphors with the Minimal Cognitive Grid

arXiv cs.AI / 5/5/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces a formal, quantitative operationalization of the Minimal Cognitive Grid (MCG) framework for evaluating the cognitive plausibility of computational models of analogy and metaphor.
  • It applies MCG to several leading approaches, including Structure-Mapping Engine (SME), CogSketch, METCL, and Large Language Models (LLMs).
  • The assessment is based on three key dimensions—Functional/Structural Ratio, Generality, and Performance Match—to measure alignment with established cognitive theories.
  • The authors argue that the resulting scores enable consistent, generalizable comparisons across different modeling systems in terms of cognitive plausibility.

Abstract

In this paper, we employ the Minimal Cognitive Grid (MCG), a framework created to evaluate the cognitive plausibility of artificial systems, to offer a systematic assessment of leading computational models of analogy and metaphor, including the Structure-Mapping Engine (SME), CogSketch, METCL, and Large Language Models (LLMs). We present a formal and quantitative operationalization of the MCG framework and, through an analysis of its three main dimensions (Functional/Structural Ratio, Generality, and Performance Match), examine how well each system aligns with standard cognitive theories of the modeled phenomena. This allows the models to be compared with respect to their cognitive plausibility according to consistent and generalizable mathematical criteria.
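To make the comparison concrete, the three MCG dimensions can be imagined as per-model scores that are aggregated into a single ranking value. The sketch below is purely illustrative: the class name, the equal-weight average, and all numeric values are assumptions for demonstration, not the paper's actual formalization or its reported scores.

```python
from dataclasses import dataclass

@dataclass
class MCGScore:
    """Hypothetical container for the three MCG dimensions (scores in [0, 1])."""
    fs_ratio: float           # Functional/Structural Ratio
    generality: float         # Generality
    performance_match: float  # Performance Match

    def aggregate(self) -> float:
        # Illustrative equal-weight average; the paper's actual
        # mathematical criteria may weight the dimensions differently.
        return (self.fs_ratio + self.generality + self.performance_match) / 3

# Hypothetical example values for two of the assessed systems;
# these numbers are NOT taken from the paper.
models = {
    "SME": MCGScore(0.8, 0.5, 0.7),
    "LLM": MCGScore(0.3, 0.9, 0.6),
}
ranking = sorted(models, key=lambda m: models[m].aggregate(), reverse=True)
print(ranking)
```

Whatever the paper's exact criteria, the key property this sketch captures is that each system receives comparable scores on the same three dimensions, so rankings remain consistent across modeling paradigms.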