Enhancing Structural Mapping with LLM-derived Abstractions for Analogical Reasoning in Narratives

arXiv cs.CL / 4/1/2026


Key Points

  • The paper addresses the challenge of enabling machines to perform analogical reasoning over narrative structures, noting that existing structural mapping methods rely on pre-extracted entities while LLMs are sensitive to prompt format and surface similarity.
  • It introduces a modular framework called YARN (Yielding Abstractions for Reasoning in Narratives) that uses LLMs to decompose narratives into units, abstract those units at four defined abstraction levels, and then align elements across stories for analogical reasoning.
  • Experiments show that the LLM-derived abstractions consistently improve model performance, yielding results competitive with or better than end-to-end LLM baselines.
  • Error analysis highlights remaining difficulties, including selecting the right abstraction granularity and capturing implicit causality, and the analysis surfaces an emerging taxonomy of analogical patterns in narratives.
  • The authors provide open code for YARN to facilitate systematic experimentation and component-level analysis in future research.
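The three-stage pipeline described above (decompose, abstract, map) can be sketched as follows. All class and function names are hypothetical illustrations, not the paper's actual API, and the toy string-splitting rules stand in for the LLM calls YARN uses for decomposition and abstraction.

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    """One narrative unit (e.g., an event clause) plus its abstractions."""
    text: str
    abstractions: dict = field(default_factory=dict)  # level -> abstract label

def decompose(narrative: str) -> list[Unit]:
    """Stand-in for LLM decomposition: naive sentence splitting."""
    return [Unit(s.strip()) for s in narrative.split(".") if s.strip()]

def abstract(unit: Unit, level: int) -> Unit:
    """Stand-in for LLM abstraction at one of four levels (1 = most concrete).

    Toy rule: higher levels keep fewer content words, simulating the idea
    that more abstract descriptions drop surface detail.
    """
    words = unit.text.split()
    keep = max(1, len(words) // level)
    unit.abstractions[level] = " ".join(words[:keep]).lower()
    return unit

def yarn_pipeline(narrative: str, levels=(1, 2, 3, 4)) -> list[Unit]:
    """Decompose a narrative, then abstract each unit at every level."""
    units = decompose(narrative)
    for unit in units:
        for level in levels:
            abstract(unit, level)
    return units
```

In the actual framework, the abstracted units from two narratives would then be handed to a separate mapping component, which is what makes the design modular: any stage can be swapped or ablated independently.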

Abstract

Analogical reasoning is a key driver of human generalization in problem-solving and argumentation. Yet, analogies between narrative structures remain challenging for machines. Cognitive engines for structural mapping are not directly applicable, as they assume pre-extracted entities, whereas LLMs' performance is sensitive to prompt format and the degree of surface similarity between narratives. This gap motivates a key question: What is the impact of enhancing structural mapping with LLM-derived abstractions on their analogical reasoning ability in narratives? To that end, we propose a modular framework named YARN (Yielding Abstractions for Reasoning in Narratives), which uses LLMs to decompose narratives into units, abstract these units, and then passes them to a mapping component that aligns elements across stories to perform analogical reasoning. We define and operationalize four levels of abstraction that capture both the general meaning of units and their roles in the story, grounded in prior work on framing. Our experiments reveal that abstractions consistently improve model performance, yielding results competitive with or better than end-to-end LLM baselines. Closer error analysis reveals the remaining challenges of abstracting at the right level and of incorporating implicit causality, along with an emerging categorization of analogical patterns in narratives. YARN enables systematic variation of experimental settings to analyze component contributions, and to support future work, we make the code for YARN openly available.
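The mapping component that aligns elements across stories could, under simple assumptions, be a greedy matcher over the abstract labels of each unit. The similarity measure below (token-level Jaccard overlap) and the `align` function are placeholders for illustration; the paper's structural-mapping component is not specified here and is presumably more sophisticated.

```python
# Hypothetical greedy aligner over abstracted narrative units.
# Token Jaccard similarity is a placeholder, not the paper's method.

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two abstract labels."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def align(source: list[str], target: list[str], threshold: float = 0.2):
    """Greedily pair each source abstraction with its best unmatched target.

    Returns (source_index, target_index, similarity) triples for pairs
    whose similarity exceeds the threshold.
    """
    pairs, used = [], set()
    for i, s in enumerate(source):
        best, best_sim = None, threshold
        for j, t in enumerate(target):
            if j in used:
                continue
            sim = jaccard(s, t)
            if sim > best_sim:
                best, best_sim = j, sim
        if best is not None:
            pairs.append((i, best, round(best_sim, 2)))
            used.add(best)
    return pairs
```

Because the abstractions strip surface detail, two stories with different characters but the same structure (e.g., "agent deceives rival") can align even when their raw text shares no vocabulary, which is the motivation for abstracting before mapping.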