When Models Know More Than They Say: Probing Analogical Reasoning in LLMs
arXiv cs.CL / 4/7/2026
Key Points
- The paper investigates how well large language models support analogical reasoning, especially when the relevant analogy depends on latent information rather than obvious surface cues.
- It compares what trained probes can decode from models' internal representations with what the same models reveal when prompted to identify the analogies directly (see the sketch after this list).
- The results show an asymmetry: in open-source models, probing works much better than prompting for rhetorical analogies, while both approaches perform comparably poorly on narrative analogies.
- The findings suggest that the link between what models represent internally and what they reveal via prompting is task-dependent: prompting does not always surface information that probing shows is present.
- Overall, the work points to shortcomings in abstraction and generalization for analogical reasoning in certain settings, highlighting differences between representational competence and behavioral accessibility.
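To make the probing-versus-prompting contrast concrete, below is a minimal sketch of a linear probe over hidden states. This is an illustrative reconstruction, not the paper's actual setup: the model name (gpt2), the toy analogy pairs, the mean-pooled readout, and the logistic-regression probe are all assumptions.

```python
# Minimal sketch: can a linear probe recover "is this pair analogous?"
# from internal representations, even if prompting the model fails?
# All specifics here (model, pooling, probe, data) are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"  # placeholder open-source model, not from the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def hidden_state(text: str, layer: int = -1) -> torch.Tensor:
    """Mean-pooled hidden state of `text` at the given layer."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer].mean(dim=1).squeeze(0)

# Hypothetical labelled pairs: (passage_a, passage_b, is_analogous).
pairs = [
    ("The atom is a tiny solar system.", "Planets orbit the sun.", 1),
    ("The atom is a tiny solar system.", "The market fell today.", 0),
    # ... many more pairs in practice
]

# Represent each pair as the concatenation of the two passages' states.
X = torch.stack([
    torch.cat([hidden_state(a), hidden_state(b)]) for a, b, _ in pairs
]).numpy()
y = [label for _, _, label in pairs]

# The probe itself: a linear classifier on frozen representations.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe train accuracy:", probe.score(X, y))
```

The intuition behind the comparison: if a simple probe like this separates analogous from non-analogous pairs while prompting the same model yields near-chance accuracy, the relevant information is linearly present in the representations but not behaviorally accessible.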