[R] Interested in recent research into recall vs recognition in LLMs

Reddit r/MachineLearning / 3/27/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The post asks whether LLMs can recall factual information more accurately than they can verify facts, including whether verification may outperform recall in specific scenarios like exact quotation checks.
  • It notes that LLMs are often trained to avoid directly quoting potentially copyrighted material, and asks how this constraint affects recall vs verification performance.
  • The author requests existing research that directly compares LLM accuracy across recall and verification tasks, particularly for factual accuracy.
  • Overall, the content is framed as a literature-seeking question about measurement and evaluation of LLM behavior rather than reporting a new finding or system release.

I've casually observed LLMs correctly verifying exact quotations that they either couldn't or wouldn't quote directly for me. I'm aware that they're trained to avoid quoting potentially copyrighted content, and of the implications of that, but it made me wonder a few things:

  1. Can LLMs verify knowledge more (or less) accurately than they can recall it?
    1b. Can LLMs accurately verify a larger (or smaller) body of knowledge than they can accurately recall?
  2. What research exists into LLM accuracy in recalling facts vs verifying facts?
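
The comparison the questions describe can be made concrete by scoring the same set of facts under two prompt styles: a recall prompt (produce the answer) and a verification prompt (confirm a stated answer). A minimal sketch, assuming a hypothetical `query_model` function standing in for a real LLM API; the canned responses exist only so the sketch runs standalone and do not represent any measured model behavior:

```python
# Sketch: measure recall accuracy vs verification accuracy on the same facts.
# `query_model` is a hypothetical stand-in for an LLM call; its canned
# answers are illustrative only (one recall error is wrong on purpose).

FACTS = [
    ("Who wrote 'Moby-Dick'?", "Herman Melville"),
    ("What is the chemical symbol for gold?", "Au"),
]

def query_model(prompt: str) -> str:
    """Hypothetical LLM interface; replace with a real API call."""
    canned = {
        "Who wrote 'Moby-Dick'?": "Herman Melville",
        "What is the chemical symbol for gold?": "Ag",  # deliberate recall error
        "True or false: the answer to 'Who wrote 'Moby-Dick'?' is 'Herman Melville'.": "true",
        "True or false: the answer to 'What is the chemical symbol for gold?' is 'Au'.": "true",
    }
    return canned.get(prompt, "unknown")

def recall_accuracy(facts):
    # Recall task: the model must produce the answer itself.
    correct = 0
    for question, answer in facts:
        response = query_model(question)
        correct += int(response.strip().lower() == answer.lower())
    return correct / len(facts)

def verification_accuracy(facts):
    # Verification task: the model only confirms a stated answer.
    correct = 0
    for question, answer in facts:
        prompt = f"True or false: the answer to '{question}' is '{answer}'."
        response = query_model(prompt)
        correct += int(response.strip().lower() == "true")
    return correct / len(facts)

print(f"recall:       {recall_accuracy(FACTS):.2f}")
print(f"verification: {verification_accuracy(FACTS):.2f}")
```

With the canned stub, recall scores 0.50 while verification scores 1.00, mirroring the asymmetry the post asks about; a real study would of course need a large fact set, distractor (false) verification items to catch a model that always answers "true", and matched phrasing across the two conditions.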
submitted by /u/Acoustic-Blacksmith