In-Context Examples Suppress Scientific Knowledge Recall in LLMs
arXiv cs.AI / 5/1/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The study finds that adding in-context examples can suppress LLMs’ ability to recall and use scientific knowledge during latent-structure recovery tasks.
- Even when the in-context examples are generated from the same underlying formulas the model was pretrained on, the model shifts computation toward empirical pattern fitting rather than knowledge-driven derivation (see the sketch after this list for an illustration of the two prompting conditions).
- Across 60 tasks in five scientific domains, 6,000 trials, and four different models, the “knowledge displacement” effect is consistent in direction.
- The impact on accuracy depends on how the displaced (knowledge-based) strategy compares with the replacement (example-based) strategy: results can worsen, stay unchanged, or sometimes appear to improve.
- For practitioners using LLMs in scientific settings, the work suggests a cautionary approach: in-context examples may undermine the very domain knowledge they are meant to reinforce.
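Below is a minimal, hypothetical sketch of the two prompting conditions the key points contrast: a zero-shot query that forces the model to lean on recalled domain knowledge, and the same query preceded by examples generated from the underlying formula. The task (recovering Ohm's law from noisy current/voltage pairs), the numbers, and the prompt wording are illustrative assumptions, not the paper's actual benchmark or code.

```python
# Hypothetical sketch: building the two prompting conditions described above.
# The task, formula, and wording are illustrative assumptions only.
import random

random.seed(0)

# Generate in-context examples from the same underlying formula (V = I * R)
# that a pretrained model is presumed to already "know".
R_TRUE = 5.0
examples = [(round(i, 2), round(i * R_TRUE + random.gauss(0, 0.1), 2))
            for i in (0.5, 1.0, 1.5, 2.0)]

task = ("A resistor is measured at several currents. "
        "State the relationship between current I (A) and voltage V (V), "
        "then predict V at I = 3.0 A.")

# Condition 1: zero-shot, so the model must rely on recalled domain knowledge.
zero_shot_prompt = task

# Condition 2: the same task preceded by formula-generated examples, which the
# study finds pushes the model toward fitting the examples rather than
# recalling and applying the formula.
example_lines = "\n".join(f"I = {i} A -> V = {v} V" for i, v in examples)
few_shot_prompt = f"Measured data:\n{example_lines}\n\n{task}"

print("--- zero-shot ---\n" + zero_shot_prompt)
print("\n--- with in-context examples ---\n" + few_shot_prompt)
```

Comparing model outputs under these two prompts (and checking whether the stated relationship matches the known formula) mirrors, in spirit, the knowledge-recall versus example-fitting contrast the study measures.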